2026-02-28 00:00:10.092017 | Job console starting
2026-02-28 00:00:10.129534 | Updating git repos
2026-02-28 00:00:10.257280 | Cloning repos into workspace
2026-02-28 00:00:10.582359 | Restoring repo states
2026-02-28 00:00:10.603603 | Merging changes
2026-02-28 00:00:10.603626 | Checking out repos
2026-02-28 00:00:11.097654 | Preparing playbooks
2026-02-28 00:00:12.433953 | Running Ansible setup
2026-02-28 00:00:22.013297 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-28 00:00:23.267286 |
2026-02-28 00:00:23.267454 | PLAY [Base pre]
2026-02-28 00:00:23.304606 |
2026-02-28 00:00:23.304719 | TASK [Setup log path fact]
2026-02-28 00:00:23.332429 | orchestrator | ok
2026-02-28 00:00:23.360444 |
2026-02-28 00:00:23.360568 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-28 00:00:23.418474 | orchestrator | ok
2026-02-28 00:00:23.435426 |
2026-02-28 00:00:23.435564 | TASK [emit-job-header : Print job information]
2026-02-28 00:00:23.515365 | # Job Information
2026-02-28 00:00:23.515610 | Ansible Version: 2.16.14
2026-02-28 00:00:23.515649 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-28 00:00:23.515680 | Pipeline: periodic-midnight
2026-02-28 00:00:23.515700 | Executor: 521e9411259a
2026-02-28 00:00:23.515717 | Triggered by: https://github.com/osism/testbed
2026-02-28 00:00:23.515735 | Event ID: da5c57b108b34da5b60920ea2a4bd68a
2026-02-28 00:00:23.529291 |
2026-02-28 00:00:23.529383 | LOOP [emit-job-header : Print node information]
2026-02-28 00:00:23.727683 | orchestrator | ok:
2026-02-28 00:00:23.727888 | orchestrator | # Node Information
2026-02-28 00:00:23.727918 | orchestrator | Inventory Hostname: orchestrator
2026-02-28 00:00:23.727938 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-28 00:00:23.727956 | orchestrator | Username: zuul-testbed02
2026-02-28 00:00:23.727973 | orchestrator | Distro: Debian 12.13
2026-02-28 00:00:23.727992 | orchestrator | Provider: static-testbed
2026-02-28 00:00:23.728009 | orchestrator | Region:
2026-02-28 00:00:23.728026 | orchestrator | Label: testbed-orchestrator
2026-02-28 00:00:23.728042 | orchestrator | Product Name: OpenStack Nova
2026-02-28 00:00:23.728058 | orchestrator | Interface IP: 81.163.193.140
2026-02-28 00:00:23.747633 |
2026-02-28 00:00:23.747756 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-28 00:00:24.931332 | orchestrator -> localhost | changed
2026-02-28 00:00:24.938909 |
2026-02-28 00:00:24.939005 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-28 00:00:28.029434 | orchestrator -> localhost | changed
2026-02-28 00:00:28.068057 |
2026-02-28 00:00:28.068175 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-28 00:00:28.840621 | orchestrator -> localhost | ok
2026-02-28 00:00:28.846135 |
2026-02-28 00:00:28.846234 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-28 00:00:28.902649 | orchestrator | ok
2026-02-28 00:00:28.937572 | orchestrator | included: /var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-28 00:00:28.971528 |
2026-02-28 00:00:28.973690 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-28 00:00:35.294034 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-28 00:00:35.294238 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/work/3d98a54cd50b49fcb9eaae05856417e1_id_rsa
2026-02-28 00:00:35.294271 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/work/3d98a54cd50b49fcb9eaae05856417e1_id_rsa.pub
2026-02-28 00:00:35.294293 | orchestrator -> localhost | The key fingerprint is:
2026-02-28 00:00:35.294313 | orchestrator -> localhost | SHA256:hV7KyEY8ihaRjNxvWK88JDv4jmXPY+J4klQp7E2E5Dc zuul-build-sshkey
2026-02-28 00:00:35.294332 | orchestrator -> localhost | The key's randomart image is:
2026-02-28 00:00:35.294360 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-28 00:00:35.294378 | orchestrator -> localhost | |o++. |
2026-02-28 00:00:35.294396 | orchestrator -> localhost | |ooo+ o . |
2026-02-28 00:00:35.294413 | orchestrator -> localhost | |..oE= = . o |
2026-02-28 00:00:35.294430 | orchestrator -> localhost | | o.O.B * + |
2026-02-28 00:00:35.294446 | orchestrator -> localhost | |. O B = S |
2026-02-28 00:00:35.294464 | orchestrator -> localhost | | = + = |
2026-02-28 00:00:35.294480 | orchestrator -> localhost | |. oo. . |
2026-02-28 00:00:35.294496 | orchestrator -> localhost | | o=+oo |
2026-02-28 00:00:35.294513 | orchestrator -> localhost | | o=ooo. |
2026-02-28 00:00:35.294529 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-28 00:00:35.294568 | orchestrator -> localhost | ok: Runtime: 0:00:03.936041
2026-02-28 00:00:35.301054 |
2026-02-28 00:00:35.301134 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-28 00:00:35.341606 | orchestrator | ok
2026-02-28 00:00:35.356373 | orchestrator | included: /var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-28 00:00:35.373933 |
2026-02-28 00:00:35.374048 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-28 00:00:35.397431 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:35.406067 |
2026-02-28 00:00:35.406163 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-28 00:00:36.334490 | orchestrator | changed
2026-02-28 00:00:36.340812 |
2026-02-28 00:00:36.340900 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-28 00:00:36.706723 | orchestrator | ok
2026-02-28 00:00:36.711744 |
2026-02-28 00:00:36.711820 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-28 00:00:37.237556 | orchestrator | ok
2026-02-28 00:00:37.253562 |
2026-02-28 00:00:37.253669 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-28 00:00:37.808073 | orchestrator | ok
2026-02-28 00:00:37.812900 |
2026-02-28 00:00:37.812974 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-28 00:00:37.871496 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:37.877020 |
2026-02-28 00:00:37.877101 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-28 00:00:40.447410 | orchestrator -> localhost | changed
2026-02-28 00:00:40.463714 |
2026-02-28 00:00:40.463824 | TASK [add-build-sshkey : Add back temp key]
2026-02-28 00:00:42.002857 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/work/3d98a54cd50b49fcb9eaae05856417e1_id_rsa (zuul-build-sshkey)
2026-02-28 00:00:42.003102 | orchestrator -> localhost | ok: Runtime: 0:00:00.065094
2026-02-28 00:00:42.010441 |
2026-02-28 00:00:42.010538 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-28 00:00:43.280079 | orchestrator | ok
2026-02-28 00:00:43.289227 |
2026-02-28 00:00:43.289335 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-28 00:00:43.387172 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:43.469157 |
2026-02-28 00:00:43.469267 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-28 00:00:44.087219 | orchestrator | ok
2026-02-28 00:00:44.132623 |
2026-02-28 00:00:44.132738 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-28 00:00:44.253371 | orchestrator | ok
2026-02-28 00:00:44.301460 |
2026-02-28 00:00:44.301562 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-28 00:00:45.722933 | orchestrator -> localhost | ok
2026-02-28 00:00:45.731829 |
2026-02-28 00:00:45.731928 | TASK [validate-host : Collect information about the host]
2026-02-28 00:00:48.171882 | orchestrator | ok
2026-02-28 00:00:48.224539 |
2026-02-28 00:00:48.224661 | TASK [validate-host : Sanitize hostname]
2026-02-28 00:00:48.357897 | orchestrator | ok
2026-02-28 00:00:48.366297 |
2026-02-28 00:00:48.366418 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-28 00:00:50.876604 | orchestrator -> localhost | changed
2026-02-28 00:00:50.881943 |
2026-02-28 00:00:50.882020 | TASK [validate-host : Collect information about zuul worker]
2026-02-28 00:00:51.707621 | orchestrator | ok
2026-02-28 00:00:51.712112 |
2026-02-28 00:00:51.712203 | TASK [validate-host : Write out all zuul information for each host]
2026-02-28 00:00:54.219983 | orchestrator -> localhost | changed
2026-02-28 00:00:54.228268 |
2026-02-28 00:00:54.228350 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-28 00:00:54.587870 | orchestrator | ok
2026-02-28 00:00:54.592627 |
2026-02-28 00:00:54.592702 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-28 00:02:18.406738 | orchestrator | changed:
2026-02-28 00:02:18.408570 | orchestrator | .d..t...... src/
2026-02-28 00:02:18.408628 | orchestrator | .d..t...... src/github.com/
2026-02-28 00:02:18.408669 | orchestrator | .d..t...... src/github.com/osism/
2026-02-28 00:02:18.408693 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-28 00:02:18.408714 | orchestrator | RedHat.yml
2026-02-28 00:02:18.422898 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-28 00:02:18.422916 | orchestrator | RedHat.yml
2026-02-28 00:02:18.422968 | orchestrator | = 1.53.0"...
2026-02-28 00:02:29.963901 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-28 00:02:30.110267 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-28 00:02:30.521742 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-28 00:02:30.592061 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-28 00:02:31.366662 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-28 00:02:31.433953 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-28 00:02:31.926329 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-28 00:02:31.926400 | orchestrator |
2026-02-28 00:02:31.926410 | orchestrator | Providers are signed by their developers.
2026-02-28 00:02:31.926417 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-28 00:02:31.926423 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-28 00:02:31.926440 | orchestrator |
2026-02-28 00:02:31.926447 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-28 00:02:31.926453 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-28 00:02:31.926470 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-28 00:02:31.926477 | orchestrator | you run "tofu init" in the future.
2026-02-28 00:02:31.926674 | orchestrator |
2026-02-28 00:02:31.926695 | orchestrator | OpenTofu has been successfully initialized!
2026-02-28 00:02:31.926703 | orchestrator |
2026-02-28 00:02:31.926709 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-28 00:02:31.926715 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-28 00:02:31.926721 | orchestrator | should now work.
2026-02-28 00:02:31.926732 | orchestrator |
2026-02-28 00:02:31.926762 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-28 00:02:31.926769 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-28 00:02:31.926775 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-28 00:02:32.112379 | orchestrator | Created and switched to workspace "ci"!
2026-02-28 00:02:32.112448 | orchestrator |
2026-02-28 00:02:32.112455 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-28 00:02:32.112461 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-28 00:02:32.112474 | orchestrator | for this configuration.
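The provider installation and lock-file message above would correspond to a `required_providers` block roughly like the following. This is an illustrative sketch, not the actual testbed configuration: only the local provider's ">= 2.2.0" constraint appears intact in this log, and attributing the truncated "= 1.53.0" fragment to the openstack provider is an assumption.

```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumption: the truncated "= 1.53.0" line above
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # matches the "Finding hashicorp/local versions" line
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

`tofu init` resolves these constraints, installs the providers (here null v3.2.4, openstack v3.4.0, local v2.7.0), and pins the selections in `.terraform.lock.hcl`.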
2026-02-28 00:02:32.278428 | orchestrator | ci.auto.tfvars
2026-02-28 00:02:32.286139 | orchestrator | default_custom.tf
2026-02-28 00:02:33.268552 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-28 00:02:33.826663 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-28 00:02:34.069964 | orchestrator |
2026-02-28 00:02:34.070064 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-28 00:02:34.070076 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-28 00:02:34.070081 | orchestrator | + create
2026-02-28 00:02:34.070096 | orchestrator | <= read (data resources)
2026-02-28 00:02:34.070101 | orchestrator |
2026-02-28 00:02:34.070106 | orchestrator | OpenTofu will perform the following actions:
2026-02-28 00:02:34.070110 | orchestrator |
2026-02-28 00:02:34.070114 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-28 00:02:34.070122 | orchestrator | # (config refers to values not yet known)
2026-02-28 00:02:34.070126 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-28 00:02:34.070131 | orchestrator | + checksum = (known after apply)
2026-02-28 00:02:34.070135 | orchestrator | + created_at = (known after apply)
2026-02-28 00:02:34.070139 | orchestrator | + file = (known after apply)
2026-02-28 00:02:34.070143 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070165 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070169 | orchestrator | + min_disk_gb = (known after apply)
2026-02-28 00:02:34.070174 | orchestrator | + min_ram_mb = (known after apply)
2026-02-28 00:02:34.070178 | orchestrator | + most_recent = true
2026-02-28 00:02:34.070182 | orchestrator | + name = (known after apply)
2026-02-28 00:02:34.070186 | orchestrator | + protected = (known after apply)
2026-02-28 00:02:34.070190 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070197 | orchestrator | + schema = (known after apply)
2026-02-28 00:02:34.070201 | orchestrator | + size_bytes = (known after apply)
2026-02-28 00:02:34.070205 | orchestrator | + tags = (known after apply)
2026-02-28 00:02:34.070209 | orchestrator | + updated_at = (known after apply)
2026-02-28 00:02:34.070213 | orchestrator | }
2026-02-28 00:02:34.070219 | orchestrator |
2026-02-28 00:02:34.070223 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-28 00:02:34.070228 | orchestrator | # (config refers to values not yet known)
2026-02-28 00:02:34.070232 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-28 00:02:34.070236 | orchestrator | + checksum = (known after apply)
2026-02-28 00:02:34.070240 | orchestrator | + created_at = (known after apply)
2026-02-28 00:02:34.070244 | orchestrator | + file = (known after apply)
2026-02-28 00:02:34.070248 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070252 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070256 | orchestrator | + min_disk_gb = (known after apply)
2026-02-28 00:02:34.070259 | orchestrator | + min_ram_mb = (known after apply)
2026-02-28 00:02:34.070263 | orchestrator | + most_recent = true
2026-02-28 00:02:34.070267 | orchestrator | + name = (known after apply)
2026-02-28 00:02:34.070271 | orchestrator | + protected = (known after apply)
2026-02-28 00:02:34.070275 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070279 | orchestrator | + schema = (known after apply)
2026-02-28 00:02:34.070283 | orchestrator | + size_bytes = (known after apply)
2026-02-28 00:02:34.070287 | orchestrator | + tags = (known after apply)
2026-02-28 00:02:34.070291 | orchestrator | + updated_at = (known after apply)
2026-02-28 00:02:34.070295 | orchestrator | }
2026-02-28 00:02:34.070300 | orchestrator |
2026-02-28 00:02:34.070304 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-28 00:02:34.070308 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-28 00:02:34.070312 | orchestrator | + content = (known after apply)
2026-02-28 00:02:34.070317 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:34.070321 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:34.070325 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:34.070329 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:34.070332 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:34.070336 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:34.070340 | orchestrator | + directory_permission = "0777"
2026-02-28 00:02:34.070344 | orchestrator | + file_permission = "0644"
2026-02-28 00:02:34.070348 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-28 00:02:34.070352 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070356 | orchestrator | }
2026-02-28 00:02:34.070360 | orchestrator |
2026-02-28 00:02:34.070364 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-28 00:02:34.070368 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-28 00:02:34.070372 | orchestrator | + content = (known after apply)
2026-02-28 00:02:34.070376 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:34.070380 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:34.070384 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:34.070387 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:34.070391 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:34.070395 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:34.070399 | orchestrator | + directory_permission = "0777"
2026-02-28 00:02:34.070403 | orchestrator | + file_permission = "0644"
2026-02-28 00:02:34.070411 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-28 00:02:34.070415 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070419 | orchestrator | }
2026-02-28 00:02:34.070424 | orchestrator |
2026-02-28 00:02:34.070434 | orchestrator | # local_file.inventory will be created
2026-02-28 00:02:34.070439 | orchestrator | + resource "local_file" "inventory" {
2026-02-28 00:02:34.070442 | orchestrator | + content = (known after apply)
2026-02-28 00:02:34.070446 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:34.070450 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:34.070454 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:34.070458 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:34.070462 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:34.070466 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:34.070470 | orchestrator | + directory_permission = "0777"
2026-02-28 00:02:34.070474 | orchestrator | + file_permission = "0644"
2026-02-28 00:02:34.070478 | orchestrator | + filename = "inventory.ci"
2026-02-28 00:02:34.070482 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070486 | orchestrator | }
2026-02-28 00:02:34.070490 | orchestrator |
2026-02-28 00:02:34.070494 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-28 00:02:34.070498 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-28 00:02:34.070502 | orchestrator | + content = (sensitive value)
2026-02-28 00:02:34.070506 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-28 00:02:34.070509 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-28 00:02:34.070513 | orchestrator | + content_md5 = (known after apply)
2026-02-28 00:02:34.070517 | orchestrator | + content_sha1 = (known after apply)
2026-02-28 00:02:34.070521 | orchestrator | + content_sha256 = (known after apply)
2026-02-28 00:02:34.070525 | orchestrator | + content_sha512 = (known after apply)
2026-02-28 00:02:34.070529 | orchestrator | + directory_permission = "0700"
2026-02-28 00:02:34.070533 | orchestrator | + file_permission = "0600"
2026-02-28 00:02:34.070537 | orchestrator | + filename = ".id_rsa.ci"
2026-02-28 00:02:34.070541 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070545 | orchestrator | }
2026-02-28 00:02:34.070549 | orchestrator |
2026-02-28 00:02:34.070552 | orchestrator | # null_resource.node_semaphore will be created
2026-02-28 00:02:34.070556 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-28 00:02:34.070560 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070564 | orchestrator | }
2026-02-28 00:02:34.070570 | orchestrator |
2026-02-28 00:02:34.070574 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-28 00:02:34.070579 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-28 00:02:34.070583 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070586 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070590 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070594 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070598 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070603 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-28 00:02:34.070607 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070610 | orchestrator | + size = 80
2026-02-28 00:02:34.070614 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.070618 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.070622 | orchestrator | }
2026-02-28 00:02:34.070626 | orchestrator |
2026-02-28 00:02:34.070630 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-28 00:02:34.070634 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:34.070638 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070642 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070646 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070653 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070657 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070661 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-28 00:02:34.070665 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070669 | orchestrator | + size = 80
2026-02-28 00:02:34.070673 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.070677 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.070681 | orchestrator | }
2026-02-28 00:02:34.070685 | orchestrator |
2026-02-28 00:02:34.070688 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-28 00:02:34.070692 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:34.070696 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070700 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070704 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070708 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070712 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070716 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-28 00:02:34.070720 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070724 | orchestrator | + size = 80
2026-02-28 00:02:34.070728 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.070732 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.070736 | orchestrator | }
2026-02-28 00:02:34.070741 | orchestrator |
2026-02-28 00:02:34.070745 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-28 00:02:34.070749 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:34.070753 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070757 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070761 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070765 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070769 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070773 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-28 00:02:34.070777 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070781 | orchestrator | + size = 80
2026-02-28 00:02:34.070797 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.070802 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.070806 | orchestrator | }
2026-02-28 00:02:34.070810 | orchestrator |
2026-02-28 00:02:34.070813 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-28 00:02:34.070817 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:34.070821 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070825 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070829 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070833 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070837 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070843 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-28 00:02:34.070847 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070851 | orchestrator | + size = 80
2026-02-28 00:02:34.070855 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.070859 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.070863 | orchestrator | }
2026-02-28 00:02:34.070867 | orchestrator |
2026-02-28 00:02:34.070871 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-28 00:02:34.070875 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:34.070879 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070883 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070887 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070894 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070941 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.070945 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-28 00:02:34.070949 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.070953 | orchestrator | + size = 80
2026-02-28 00:02:34.070957 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.070961 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.070965 | orchestrator | }
2026-02-28 00:02:34.070969 | orchestrator |
2026-02-28 00:02:34.070973 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-28 00:02:34.070977 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:34.070981 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.070985 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.070989 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.070993 | orchestrator | + image_id = (known after apply)
2026-02-28 00:02:34.070997 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071001 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-28 00:02:34.071005 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071008 | orchestrator | + size = 80
2026-02-28 00:02:34.071012 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071016 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071020 | orchestrator | }
2026-02-28 00:02:34.071026 | orchestrator |
2026-02-28 00:02:34.071031 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-28 00:02:34.071035 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071039 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071043 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071047 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071050 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071054 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-28 00:02:34.071058 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071062 | orchestrator | + size = 20
2026-02-28 00:02:34.071066 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071070 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071074 | orchestrator | }
2026-02-28 00:02:34.071078 | orchestrator |
2026-02-28 00:02:34.071082 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-28 00:02:34.071086 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071090 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071094 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071098 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071102 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071106 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-28 00:02:34.071109 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071113 | orchestrator | + size = 20
2026-02-28 00:02:34.071117 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071121 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071125 | orchestrator | }
2026-02-28 00:02:34.071129 | orchestrator |
2026-02-28 00:02:34.071133 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-28 00:02:34.071137 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071141 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071145 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071149 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071152 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071156 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-28 00:02:34.071160 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071168 | orchestrator | + size = 20
2026-02-28 00:02:34.071172 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071176 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071180 | orchestrator | }
2026-02-28 00:02:34.071184 | orchestrator |
2026-02-28 00:02:34.071187 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-28 00:02:34.071191 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071195 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071199 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071203 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071207 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071211 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-28 00:02:34.071215 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071219 | orchestrator | + size = 20
2026-02-28 00:02:34.071223 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071226 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071230 | orchestrator | }
2026-02-28 00:02:34.071234 | orchestrator |
2026-02-28 00:02:34.071238 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-28 00:02:34.071242 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071246 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071250 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071254 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071258 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071262 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-28 00:02:34.071266 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071272 | orchestrator | + size = 20
2026-02-28 00:02:34.071276 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071280 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071284 | orchestrator | }
2026-02-28 00:02:34.071288 | orchestrator |
2026-02-28 00:02:34.071292 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-28 00:02:34.071296 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071300 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071304 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071308 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071312 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071316 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-28 00:02:34.071319 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071323 | orchestrator | + size = 20
2026-02-28 00:02:34.071327 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071331 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071335 | orchestrator | }
2026-02-28 00:02:34.071341 | orchestrator |
2026-02-28 00:02:34.071345 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-28 00:02:34.071349 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071353 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071357 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071361 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071364 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071368 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-28 00:02:34.071372 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071376 | orchestrator | + size = 20
2026-02-28 00:02:34.071380 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071384 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071388 | orchestrator | }
2026-02-28 00:02:34.071392 | orchestrator |
2026-02-28 00:02:34.071396 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-28 00:02:34.071400 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:34.071411 | orchestrator | + attachment = (known after apply)
2026-02-28 00:02:34.071415 | orchestrator | + availability_zone = "nova"
2026-02-28 00:02:34.071419 | orchestrator | + id = (known after apply)
2026-02-28 00:02:34.071423 | orchestrator | + metadata = (known after apply)
2026-02-28 00:02:34.071427 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-28 00:02:34.071431 | orchestrator | + region = (known after apply)
2026-02-28 00:02:34.071435 | orchestrator | + size = 20
2026-02-28 00:02:34.071439 | orchestrator | + volume_retype_policy = "never"
2026-02-28 00:02:34.071443 | orchestrator | + volume_type = "ssd"
2026-02-28 00:02:34.071447 | orchestrator | }
2026-02-28 00:02:34.071451 | orchestrator |
2026-02-28 00:02:34.071454 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-28 00:02:34.071458 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-28 00:02:34.071462 | orchestrator | + attachment = (known after apply) 2026-02-28 00:02:34.071466 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.071470 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.071474 | orchestrator | + metadata = (known after apply) 2026-02-28 00:02:34.071478 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-28 00:02:34.071482 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.071486 | orchestrator | + size = 20 2026-02-28 00:02:34.071490 | orchestrator | + volume_retype_policy = "never" 2026-02-28 00:02:34.071493 | orchestrator | + volume_type = "ssd" 2026-02-28 00:02:34.071497 | orchestrator | } 2026-02-28 00:02:34.071501 | orchestrator | 2026-02-28 00:02:34.071505 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-28 00:02:34.071509 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-28 00:02:34.071513 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.071517 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.071521 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.071525 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.071529 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.071533 | orchestrator | + config_drive = true 2026-02-28 00:02:34.071537 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.071541 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.071544 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-28 00:02:34.071548 | orchestrator | + force_delete = false 2026-02-28 00:02:34.071552 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.071556 | 
orchestrator | + id = (known after apply) 2026-02-28 00:02:34.071560 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.071564 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.071568 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.071572 | orchestrator | + name = "testbed-manager" 2026-02-28 00:02:34.071575 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.071579 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.071583 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.071587 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:34.071591 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.071595 | orchestrator | + user_data = (sensitive value) 2026-02-28 00:02:34.071599 | orchestrator | 2026-02-28 00:02:34.071603 | orchestrator | + block_device { 2026-02-28 00:02:34.071607 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.071611 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:34.071617 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.071621 | orchestrator | + multiattach = false 2026-02-28 00:02:34.071625 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.071629 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.071636 | orchestrator | } 2026-02-28 00:02:34.071640 | orchestrator | 2026-02-28 00:02:34.071644 | orchestrator | + network { 2026-02-28 00:02:34.071648 | orchestrator | + access_network = false 2026-02-28 00:02:34.071652 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.071656 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.071660 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.071664 | orchestrator | + name = (known after apply) 2026-02-28 00:02:34.071668 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.071671 | orchestrator | + uuid = (known after apply) 2026-02-28 
00:02:34.071675 | orchestrator | } 2026-02-28 00:02:34.071680 | orchestrator | } 2026-02-28 00:02:34.071685 | orchestrator | 2026-02-28 00:02:34.071689 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-28 00:02:34.071693 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:34.071697 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.071701 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.071705 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.071709 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.071713 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.071717 | orchestrator | + config_drive = true 2026-02-28 00:02:34.071721 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.071724 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.071728 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:34.071732 | orchestrator | + force_delete = false 2026-02-28 00:02:34.071736 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.071740 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.071744 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.071748 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.071752 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.071756 | orchestrator | + name = "testbed-node-0" 2026-02-28 00:02:34.071760 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.071764 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.071767 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.071771 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:34.071775 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.071779 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:34.071783 | orchestrator | 2026-02-28 00:02:34.071813 | orchestrator | + block_device { 2026-02-28 00:02:34.071817 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.071821 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:34.071825 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.071829 | orchestrator | + multiattach = false 2026-02-28 00:02:34.071833 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.071837 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.071841 | orchestrator | } 2026-02-28 00:02:34.071845 | orchestrator | 2026-02-28 00:02:34.071849 | orchestrator | + network { 2026-02-28 00:02:34.071853 | orchestrator | + access_network = false 2026-02-28 00:02:34.071856 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.071860 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.071864 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.071868 | orchestrator | + name = (known after apply) 2026-02-28 00:02:34.071872 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.071876 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.071880 | orchestrator | } 2026-02-28 00:02:34.071884 | orchestrator | } 2026-02-28 00:02:34.071888 | orchestrator | 2026-02-28 00:02:34.071892 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-28 00:02:34.071896 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:34.071900 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.071907 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.071911 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.071915 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.071919 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.071923 
| orchestrator | + config_drive = true 2026-02-28 00:02:34.071927 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.071930 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.071934 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:34.071938 | orchestrator | + force_delete = false 2026-02-28 00:02:34.071942 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.071946 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.071950 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.071954 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.071958 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.071962 | orchestrator | + name = "testbed-node-1" 2026-02-28 00:02:34.071965 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.071969 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.071973 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.071977 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:34.071981 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.071985 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:34.071989 | orchestrator | 2026-02-28 00:02:34.071993 | orchestrator | + block_device { 2026-02-28 00:02:34.071996 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.072000 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:34.072004 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.072008 | orchestrator | + multiattach = false 2026-02-28 00:02:34.072012 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.072016 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072020 | orchestrator | } 2026-02-28 00:02:34.072024 | orchestrator | 2026-02-28 00:02:34.072028 | orchestrator | + network { 2026-02-28 00:02:34.072032 | orchestrator | + access_network = 
false 2026-02-28 00:02:34.072035 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.072039 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.072043 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.072047 | orchestrator | + name = (known after apply) 2026-02-28 00:02:34.072051 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.072055 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072059 | orchestrator | } 2026-02-28 00:02:34.072063 | orchestrator | } 2026-02-28 00:02:34.072069 | orchestrator | 2026-02-28 00:02:34.072073 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-28 00:02:34.072077 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:34.072081 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.072084 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.072089 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.072092 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.072099 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.072103 | orchestrator | + config_drive = true 2026-02-28 00:02:34.072107 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.072111 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.072115 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:34.072119 | orchestrator | + force_delete = false 2026-02-28 00:02:34.072123 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.072127 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072131 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.072137 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.072141 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.072145 | orchestrator | + name = 
"testbed-node-2" 2026-02-28 00:02:34.072149 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.072153 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072157 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.072161 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:34.072165 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.072169 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:34.072173 | orchestrator | 2026-02-28 00:02:34.072176 | orchestrator | + block_device { 2026-02-28 00:02:34.072180 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.072184 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:34.072188 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.072192 | orchestrator | + multiattach = false 2026-02-28 00:02:34.072196 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.072200 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072204 | orchestrator | } 2026-02-28 00:02:34.072208 | orchestrator | 2026-02-28 00:02:34.072212 | orchestrator | + network { 2026-02-28 00:02:34.072215 | orchestrator | + access_network = false 2026-02-28 00:02:34.072219 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.072223 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.072227 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.072231 | orchestrator | + name = (known after apply) 2026-02-28 00:02:34.072235 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.072239 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072243 | orchestrator | } 2026-02-28 00:02:34.072247 | orchestrator | } 2026-02-28 00:02:34.072252 | orchestrator | 2026-02-28 00:02:34.072256 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-28 00:02:34.072260 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:34.072264 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.072268 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.072272 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.072275 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.072279 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.072283 | orchestrator | + config_drive = true 2026-02-28 00:02:34.072287 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.072291 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.072295 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:34.072299 | orchestrator | + force_delete = false 2026-02-28 00:02:34.072302 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.072306 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072310 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.072314 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.072318 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.072322 | orchestrator | + name = "testbed-node-3" 2026-02-28 00:02:34.072326 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.072330 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072333 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.072337 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:34.072341 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.072345 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:34.072349 | orchestrator | 2026-02-28 00:02:34.072353 | orchestrator | + block_device { 2026-02-28 00:02:34.072359 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.072363 | orchestrator | + delete_on_termination = false 2026-02-28 
00:02:34.072367 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.072375 | orchestrator | + multiattach = false 2026-02-28 00:02:34.072379 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.072383 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072387 | orchestrator | } 2026-02-28 00:02:34.072391 | orchestrator | 2026-02-28 00:02:34.072394 | orchestrator | + network { 2026-02-28 00:02:34.072398 | orchestrator | + access_network = false 2026-02-28 00:02:34.072402 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.072406 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.072410 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.072414 | orchestrator | + name = (known after apply) 2026-02-28 00:02:34.072418 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.072422 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072425 | orchestrator | } 2026-02-28 00:02:34.072429 | orchestrator | } 2026-02-28 00:02:34.072435 | orchestrator | 2026-02-28 00:02:34.072439 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-28 00:02:34.072443 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:34.072447 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.072451 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.072455 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.072459 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.072463 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.072466 | orchestrator | + config_drive = true 2026-02-28 00:02:34.072470 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.072474 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.072478 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:34.072482 | 
orchestrator | + force_delete = false 2026-02-28 00:02:34.072486 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.072490 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072494 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.072497 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.072501 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.072505 | orchestrator | + name = "testbed-node-4" 2026-02-28 00:02:34.072509 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.072513 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072517 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.072521 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:34.072525 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.072529 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:34.072532 | orchestrator | 2026-02-28 00:02:34.072536 | orchestrator | + block_device { 2026-02-28 00:02:34.072540 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.072544 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:34.072548 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.072552 | orchestrator | + multiattach = false 2026-02-28 00:02:34.072556 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.072560 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072564 | orchestrator | } 2026-02-28 00:02:34.072568 | orchestrator | 2026-02-28 00:02:34.072571 | orchestrator | + network { 2026-02-28 00:02:34.072575 | orchestrator | + access_network = false 2026-02-28 00:02:34.072579 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.072583 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.072587 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.072591 | orchestrator | + name = (known 
after apply) 2026-02-28 00:02:34.072595 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.072599 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072603 | orchestrator | } 2026-02-28 00:02:34.072607 | orchestrator | } 2026-02-28 00:02:34.072616 | orchestrator | 2026-02-28 00:02:34.072620 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-28 00:02:34.072624 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:34.072628 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:34.072632 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:34.072636 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:34.072639 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:34.072643 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:34.072647 | orchestrator | + config_drive = true 2026-02-28 00:02:34.072651 | orchestrator | + created = (known after apply) 2026-02-28 00:02:34.072655 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:34.072659 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:34.072663 | orchestrator | + force_delete = false 2026-02-28 00:02:34.072669 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:34.072673 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072677 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:34.072681 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:34.072685 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:34.072689 | orchestrator | + name = "testbed-node-5" 2026-02-28 00:02:34.072693 | orchestrator | + power_state = "active" 2026-02-28 00:02:34.072697 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072701 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:34.072704 | orchestrator | + 
stop_before_destroy = false 2026-02-28 00:02:34.072708 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:34.072712 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:34.072716 | orchestrator | 2026-02-28 00:02:34.072720 | orchestrator | + block_device { 2026-02-28 00:02:34.072724 | orchestrator | + boot_index = 0 2026-02-28 00:02:34.072728 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:34.072732 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:34.072736 | orchestrator | + multiattach = false 2026-02-28 00:02:34.072740 | orchestrator | + source_type = "volume" 2026-02-28 00:02:34.072744 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072747 | orchestrator | } 2026-02-28 00:02:34.072751 | orchestrator | 2026-02-28 00:02:34.072755 | orchestrator | + network { 2026-02-28 00:02:34.072759 | orchestrator | + access_network = false 2026-02-28 00:02:34.072763 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:34.072767 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:34.072771 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:34.072775 | orchestrator | + name = (known after apply) 2026-02-28 00:02:34.072779 | orchestrator | + port = (known after apply) 2026-02-28 00:02:34.072783 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:34.072810 | orchestrator | } 2026-02-28 00:02:34.072815 | orchestrator | } 2026-02-28 00:02:34.072818 | orchestrator | 2026-02-28 00:02:34.072822 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-28 00:02:34.072826 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-28 00:02:34.072830 | orchestrator | + fingerprint = (known after apply) 2026-02-28 00:02:34.072834 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072838 | orchestrator | + name = "testbed" 2026-02-28 00:02:34.072842 | orchestrator | + private_key = 
(sensitive value) 2026-02-28 00:02:34.072846 | orchestrator | + public_key = (known after apply) 2026-02-28 00:02:34.072850 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072853 | orchestrator | + user_id = (known after apply) 2026-02-28 00:02:34.072857 | orchestrator | } 2026-02-28 00:02:34.072861 | orchestrator | 2026-02-28 00:02:34.072865 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-28 00:02:34.072869 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:34.072877 | orchestrator | + device = (known after apply) 2026-02-28 00:02:34.072880 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072884 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:34.072888 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072892 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:34.072896 | orchestrator | } 2026-02-28 00:02:34.072900 | orchestrator | 2026-02-28 00:02:34.072904 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-28 00:02:34.072908 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:34.072912 | orchestrator | + device = (known after apply) 2026-02-28 00:02:34.072916 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.072920 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:34.072924 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.072927 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:34.072931 | orchestrator | } 2026-02-28 00:02:34.072935 | orchestrator | 2026-02-28 00:02:34.072939 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-28 00:02:34.072943 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-28 00:02:34.075327 | orchestrator | + network_id = (known after apply) 2026-02-28 00:02:34.075331 | orchestrator | + no_gateway = false 2026-02-28 00:02:34.075335 | orchestrator | + region = (known after apply) 2026-02-28 00:02:34.075338 | orchestrator | + service_types = (known after apply) 2026-02-28 00:02:34.075346 | orchestrator | + tenant_id = (known after apply) 2026-02-28 00:02:34.075350 | orchestrator | 2026-02-28 00:02:34.075354 | orchestrator | + allocation_pool { 2026-02-28 00:02:34.075358 | orchestrator | + end = "192.168.31.250" 2026-02-28 00:02:34.075362 | orchestrator | + start = "192.168.31.200" 2026-02-28 00:02:34.075366 | orchestrator | } 2026-02-28 00:02:34.075370 | orchestrator | } 2026-02-28 00:02:34.075374 | orchestrator | 2026-02-28 00:02:34.075378 | orchestrator | # terraform_data.image will be created 2026-02-28 00:02:34.075382 | orchestrator | + resource "terraform_data" "image" { 2026-02-28 00:02:34.075386 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.075390 | orchestrator | + input = "Ubuntu 24.04" 2026-02-28 00:02:34.075393 | orchestrator | + output = (known after apply) 2026-02-28 00:02:34.075397 | orchestrator | } 2026-02-28 00:02:34.075401 | orchestrator | 2026-02-28 00:02:34.075405 | orchestrator | # terraform_data.image_node will be created 2026-02-28 00:02:34.075409 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-28 00:02:34.075413 | orchestrator | + id = (known after apply) 2026-02-28 00:02:34.075417 | orchestrator | + input = "Ubuntu 24.04" 2026-02-28 00:02:34.075421 | orchestrator | + output = (known after apply) 2026-02-28 00:02:34.075425 | orchestrator | } 2026-02-28 00:02:34.075429 | orchestrator | 2026-02-28 00:02:34.075433 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-28 00:02:34.075437 | orchestrator | 2026-02-28 00:02:34.075441 | orchestrator | Changes to Outputs: 2026-02-28 00:02:34.075445 | orchestrator | + manager_address = (sensitive value) 2026-02-28 00:02:34.075449 | orchestrator | + private_key = (sensitive value) 2026-02-28 00:02:34.339226 | orchestrator | terraform_data.image_node: Creating... 2026-02-28 00:02:34.339331 | orchestrator | terraform_data.image: Creating... 2026-02-28 00:02:34.339513 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=565249d1-a64c-2cba-d5c7-5772443872be] 2026-02-28 00:02:34.339844 | orchestrator | terraform_data.image: Creation complete after 0s [id=246be51d-9e42-8c42-ec80-3533c83c54c4] 2026-02-28 00:02:34.353346 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-28 00:02:34.366256 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-28 00:02:34.372977 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-28 00:02:34.373963 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-28 00:02:34.374058 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-28 00:02:34.375106 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-02-28 00:02:34.376873 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-28 00:02:34.380876 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-28 00:02:34.381417 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-02-28 00:02:34.383621 | orchestrator | openstack_compute_keypair_v2.key: Creating... 
2026-02-28 00:02:34.893888 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-28 00:02:36.188116 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-28 00:02:36.188190 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-28 00:02:36.188205 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-02-28 00:02:36.188218 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-02-28 00:02:36.188230 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-28 00:02:36.188242 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=7d170891-630f-4327-b04a-0996bdcb0881] 2026-02-28 00:02:36.188254 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-02-28 00:02:38.085695 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=794d6bd6-cdc9-465f-9345-dcdc45cdec57] 2026-02-28 00:02:38.090278 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=0e1c50ba-f800-4f3f-b273-e42be7614723] 2026-02-28 00:02:38.091926 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-02-28 00:02:38.095272 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-02-28 00:02:38.118068 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=e9cea570-02f9-4492-a688-e95ec43126f4] 2026-02-28 00:02:38.126874 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2026-02-28 00:02:38.137592 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=7a83ed65-2ee8-47d4-9c51-9fbd7e5801de] 2026-02-28 00:02:38.141434 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-02-28 00:02:38.149764 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=dc966f00-bd76-481b-987a-91131c9d0b5a] 2026-02-28 00:02:38.155952 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-02-28 00:02:38.195400 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=35a5842a-f5e3-41cd-9ad4-9887af65562b] 2026-02-28 00:02:38.199118 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=72888dc0-89fa-4d82-a9e9-f7d921f86abf] 2026-02-28 00:02:38.205900 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=5a576e70-544d-44fd-a16d-0d3a23dfbf81] 2026-02-28 00:02:38.219286 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-02-28 00:02:38.219380 | orchestrator | local_file.id_rsa_pub: Creating... 2026-02-28 00:02:38.219394 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-02-28 00:02:38.222742 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=83d93ad25b2922b6a00dd8fd4a55bfea4f2767d7] 2026-02-28 00:02:38.226349 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=ad7d5d5f217039c7890da5b1ee443a8ef5973d44] 2026-02-28 00:02:38.228963 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-02-28 00:02:38.250704 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0] 2026-02-28 00:02:38.774781 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=fdf124ea-0529-4dc9-b27a-d5265c98bb36] 2026-02-28 00:02:40.575487 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 3s [id=e2044da8-d558-432c-8157-a4751543dbf9] 2026-02-28 00:02:40.587764 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-02-28 00:02:41.573670 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=b20a930b-70d1-42c7-a265-d4a23b5b0ea5] 2026-02-28 00:02:41.612328 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=9bcd01d8-dc60-46f2-8431-43e53714b811] 2026-02-28 00:02:41.642516 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=fad726d2-031e-4d2a-a9ae-f431162b566b] 2026-02-28 00:02:41.649657 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd] 2026-02-28 00:02:41.669135 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=c1f8a38c-6103-4a77-9722-35142b367f20] 2026-02-28 00:02:41.691971 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8da55c0a-8efa-423b-91a0-c7c16194a0ee] 2026-02-28 00:02:46.215341 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=b6fc9cca-24b4-4e47-a2a5-0f6e026b100d] 2026-02-28 00:02:46.221124 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-02-28 00:02:46.222577 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-02-28 00:02:46.222693 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-02-28 00:02:46.454610 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=709b9798-cc8a-4612-b102-b3554b4cf0ea] 2026-02-28 00:02:47.442329 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-02-28 00:02:47.442404 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-02-28 00:02:47.442419 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-02-28 00:02:47.442462 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-02-28 00:02:47.442474 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-02-28 00:02:47.442485 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-02-28 00:02:47.442496 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-02-28 00:02:47.442507 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-02-28 00:02:47.442519 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=2f5ab968-2384-4c00-a6ce-19f83aa99a6c] 2026-02-28 00:02:47.442531 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-02-28 00:02:47.442543 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=0d7c4a22-0bf1-4364-a224-17ae0f40da53] 2026-02-28 00:02:47.442556 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-02-28 00:02:47.442567 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=62523482-eb0e-43b3-b53b-d6ef12801201] 2026-02-28 00:02:47.442578 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-02-28 00:02:47.442589 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8a87a8ec-1f9c-49ec-a73b-b742ee41ef42] 2026-02-28 00:02:47.442600 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-02-28 00:02:47.442611 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3f4f4c49-d034-48cc-8518-b9aa45bbe456] 2026-02-28 00:02:47.442623 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-02-28 00:02:47.586847 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=e469a64e-8712-4606-974d-805d5a4d7b0e] 2026-02-28 00:02:47.593114 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-02-28 00:02:47.673737 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=04d8057d-cfcc-42b8-9d38-3739e58b0055] 2026-02-28 00:02:47.680052 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-02-28 00:02:47.703761 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=4dbd4aa4-7f48-465f-bec0-00b0b3d7353d] 2026-02-28 00:02:47.710283 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-02-28 00:02:47.819991 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=2b6889cd-e922-4aec-ae2e-9497babb701e] 2026-02-28 00:02:47.838516 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=6df801d7-8e8a-4386-8ddb-36e8122465db] 2026-02-28 00:02:48.107649 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=6663f9c6-e278-4230-a9d1-a342c6aee1a7] 2026-02-28 00:02:48.322775 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=2eab42f0-e1dd-42be-b0bd-4246986dacad] 2026-02-28 00:02:48.459684 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=0af980c5-9bd7-49c9-8f21-ea75d10fc613] 2026-02-28 00:02:48.609756 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=d7f37536-581d-44bf-9536-9692d539d4b4] 2026-02-28 00:02:48.870972 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=d39f11f8-fc59-45b3-ab02-bf38155d7d40] 2026-02-28 00:02:48.876944 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=ada7ed48-3965-42e8-9c94-e153cc0d64ab] 2026-02-28 00:02:48.899239 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-02-28 00:02:48.915588 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-02-28 00:02:48.918418 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-02-28 00:02:48.921176 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-02-28 00:02:48.936993 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
2026-02-28 00:02:48.938697 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-02-28 00:02:48.942572 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-02-28 00:02:50.213138 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=5080fb71-8b03-440f-a632-86b245e82e23] 2026-02-28 00:02:50.523625 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 3s [id=e18d8067-7bdf-4718-bfaa-cb14c2309c40] 2026-02-28 00:02:51.023133 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=a206c9e8-8910-4da4-baa3-ca8fd21bf801] 2026-02-28 00:02:51.032978 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-02-28 00:02:51.036586 | orchestrator | local_file.inventory: Creating... 2026-02-28 00:02:51.041941 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-02-28 00:02:51.044958 | orchestrator | local_file.inventory: Creation complete after 0s [id=d3d4d3aa29cb866dc656cdebccf8d3d315067624] 2026-02-28 00:02:51.046737 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=787a6fc95c293065c74ca2583af77cfe073515b2] 2026-02-28 00:02:52.607538 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=a206c9e8-8910-4da4-baa3-ca8fd21bf801] 2026-02-28 00:02:58.920701 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-02-28 00:02:58.920857 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-02-28 00:02:58.921896 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-02-28 00:02:58.938133 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-02-28 00:02:58.944245 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-02-28 00:02:58.944345 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-02-28 00:03:08.926402 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-02-28 00:03:08.926536 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-02-28 00:03:08.926561 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-02-28 00:03:08.938812 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-02-28 00:03:08.944974 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-02-28 00:03:08.945090 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-02-28 00:03:18.935872 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-02-28 00:03:18.936016 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-02-28 00:03:18.936079 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-02-28 00:03:18.939107 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-02-28 00:03:18.945416 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-02-28 00:03:18.945686 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2026-02-28 00:03:19.607678 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=896d9916-57ea-48e1-80a1-891a44a85cb7] 2026-02-28 00:03:19.821118 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=e1068ad0-ca76-4a62-b387-4fee0636a105] 2026-02-28 00:03:19.829324 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=5ac0c489-2802-41b0-8659-c22d4160b55d] 2026-02-28 00:03:28.937379 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-02-28 00:03:28.937509 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-02-28 00:03:28.946729 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-02-28 00:03:29.852418 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=34458aa2-a560-4ea5-87ff-3df87d4b26cc] 2026-02-28 00:03:29.961407 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=457032d8-9942-41d9-8de5-16f57fe95879] 2026-02-28 00:03:29.997418 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=39284b81-b81e-4975-a090-11cfb5135d49] 2026-02-28 00:03:30.039739 | orchestrator | null_resource.node_semaphore: Creating... 2026-02-28 00:03:30.046035 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1302208690093368397] 2026-02-28 00:03:30.046584 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-02-28 00:03:30.046695 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-02-28 00:03:30.048542 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2026-02-28 00:03:30.048767 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-02-28 00:03:30.048911 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-02-28 00:03:30.049132 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-02-28 00:03:30.049163 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-02-28 00:03:30.056307 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-02-28 00:03:30.072432 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-02-28 00:03:30.087013 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-02-28 00:03:33.454244 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=34458aa2-a560-4ea5-87ff-3df87d4b26cc/dc966f00-bd76-481b-987a-91131c9d0b5a] 2026-02-28 00:03:33.485291 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=e1068ad0-ca76-4a62-b387-4fee0636a105/7a83ed65-2ee8-47d4-9c51-9fbd7e5801de] 2026-02-28 00:03:33.486190 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=5ac0c489-2802-41b0-8659-c22d4160b55d/794d6bd6-cdc9-465f-9345-dcdc45cdec57] 2026-02-28 00:03:33.515909 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=34458aa2-a560-4ea5-87ff-3df87d4b26cc/e9cea570-02f9-4492-a688-e95ec43126f4] 2026-02-28 00:03:33.528541 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5ac0c489-2802-41b0-8659-c22d4160b55d/35a5842a-f5e3-41cd-9ad4-9887af65562b] 2026-02-28 00:03:33.548699 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 
4s [id=e1068ad0-ca76-4a62-b387-4fee0636a105/72888dc0-89fa-4d82-a9e9-f7d921f86abf] 2026-02-28 00:03:39.622245 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=34458aa2-a560-4ea5-87ff-3df87d4b26cc/0e1c50ba-f800-4f3f-b273-e42be7614723] 2026-02-28 00:03:39.643525 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=5ac0c489-2802-41b0-8659-c22d4160b55d/2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0] 2026-02-28 00:03:39.670325 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=e1068ad0-ca76-4a62-b387-4fee0636a105/5a576e70-544d-44fd-a16d-0d3a23dfbf81] 2026-02-28 00:03:40.085407 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-02-28 00:03:50.094811 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-02-28 00:03:50.585120 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=16db418f-805f-4681-b13e-2caddca44d3a] 2026-02-28 00:03:53.347382 | orchestrator | 2026-02-28 00:03:53.347476 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2026-02-28 00:03:53.347540 | orchestrator | 2026-02-28 00:03:53.347575 | orchestrator | Outputs: 2026-02-28 00:03:53.347588 | orchestrator | 2026-02-28 00:03:53.347634 | orchestrator | manager_address = 2026-02-28 00:03:53.347649 | orchestrator | private_key = 2026-02-28 00:03:53.507010 | orchestrator | ok: Runtime: 0:01:23.672298 2026-02-28 00:03:53.546077 | 2026-02-28 00:03:53.546273 | TASK [Fetch manager address] 2026-02-28 00:03:54.037173 | orchestrator | ok 2026-02-28 00:03:54.050681 | 2026-02-28 00:03:54.050860 | TASK [Set manager_host address] 2026-02-28 00:03:54.130859 | orchestrator | ok 2026-02-28 00:03:54.139874 | 2026-02-28 00:03:54.140010 | LOOP [Update ansible collections] 2026-02-28 00:03:55.105855 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:03:55.106295 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-28 00:03:55.106365 | orchestrator | Starting galaxy collection install process 2026-02-28 00:03:55.106406 | orchestrator | Process install dependency map 2026-02-28 00:03:55.106443 | orchestrator | Starting collection install process 2026-02-28 00:03:55.106477 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-02-28 00:03:55.106517 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-02-28 00:03:55.106567 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-02-28 00:03:55.106653 | orchestrator | ok: Item: commons Runtime: 0:00:00.628395 2026-02-28 00:03:56.074506 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:03:56.074703 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-28 00:03:56.074754 | orchestrator | Starting galaxy collection 
install process 2026-02-28 00:03:56.074792 | orchestrator | Process install dependency map 2026-02-28 00:03:56.074830 | orchestrator | Starting collection install process 2026-02-28 00:03:56.074890 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-02-28 00:03:56.074925 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-02-28 00:03:56.074957 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-28 00:03:56.075009 | orchestrator | ok: Item: services Runtime: 0:00:00.683037 2026-02-28 00:03:56.103345 | 2026-02-28 00:03:56.103545 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-28 00:04:06.684328 | orchestrator | ok 2026-02-28 00:04:06.692678 | 2026-02-28 00:04:06.692767 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-28 00:05:06.734599 | orchestrator | ok 2026-02-28 00:05:06.745159 | 2026-02-28 00:05:06.745346 | TASK [Fetch manager ssh hostkey] 2026-02-28 00:05:08.322654 | orchestrator | Output suppressed because no_log was given 2026-02-28 00:05:08.336622 | 2026-02-28 00:05:08.336780 | TASK [Get ssh keypair from terraform environment] 2026-02-28 00:05:08.874424 | orchestrator | ok: Runtime: 0:00:00.010920 2026-02-28 00:05:08.890758 | 2026-02-28 00:05:08.890966 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-28 00:05:08.940152 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-28 00:05:08.951043 | 2026-02-28 00:05:08.951256 | TASK [Run manager part 0] 2026-02-28 00:05:09.860118 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:05:09.915850 | orchestrator | 2026-02-28 00:05:09.916000 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-02-28 00:05:09.916008 | orchestrator | 2026-02-28 00:05:09.916021 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-02-28 00:05:11.814451 | orchestrator | ok: [testbed-manager] 2026-02-28 00:05:11.815448 | orchestrator | 2026-02-28 00:05:11.815595 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-28 00:05:11.815632 | orchestrator | 2026-02-28 00:05:11.815664 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:05:13.761085 | orchestrator | ok: [testbed-manager] 2026-02-28 00:05:13.761283 | orchestrator | 2026-02-28 00:05:13.761305 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-28 00:05:14.473754 | orchestrator | ok: [testbed-manager] 2026-02-28 00:05:14.473804 | orchestrator | 2026-02-28 00:05:14.473812 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-28 00:05:14.518277 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.518333 | orchestrator | 2026-02-28 00:05:14.518346 | orchestrator | TASK [Update package cache] **************************************************** 2026-02-28 00:05:14.556463 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.556528 | orchestrator | 2026-02-28 00:05:14.556541 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-28 00:05:14.599059 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.599110 | 
orchestrator | 2026-02-28 00:05:14.599117 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-28 00:05:14.638581 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.638718 | orchestrator | 2026-02-28 00:05:14.638728 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-28 00:05:14.677649 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.677701 | orchestrator | 2026-02-28 00:05:14.677709 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-28 00:05:14.710328 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.710390 | orchestrator | 2026-02-28 00:05:14.710402 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-28 00:05:14.740155 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:14.740214 | orchestrator | 2026-02-28 00:05:14.740222 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-28 00:05:15.543598 | orchestrator | changed: [testbed-manager] 2026-02-28 00:05:15.543682 | orchestrator | 2026-02-28 00:05:15.543697 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-28 00:08:18.439828 | orchestrator | changed: [testbed-manager] 2026-02-28 00:08:18.440045 | orchestrator | 2026-02-28 00:08:18.440072 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-28 00:10:57.027827 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:57.027926 | orchestrator | 2026-02-28 00:10:57.027942 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-28 00:11:22.369754 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:22.369841 | orchestrator | 2026-02-28 00:11:22.369857 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-02-28 00:11:31.875384 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:31.875479 | orchestrator | 2026-02-28 00:11:31.875498 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-28 00:11:31.919325 | orchestrator | ok: [testbed-manager] 2026-02-28 00:11:31.919435 | orchestrator | 2026-02-28 00:11:31.919463 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-28 00:11:32.767322 | orchestrator | ok: [testbed-manager] 2026-02-28 00:11:32.767431 | orchestrator | 2026-02-28 00:11:32.767458 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-28 00:11:33.538994 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:33.539084 | orchestrator | 2026-02-28 00:11:33.539102 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-28 00:11:40.062798 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:40.062907 | orchestrator | 2026-02-28 00:11:40.062973 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-28 00:11:46.131976 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:46.132071 | orchestrator | 2026-02-28 00:11:46.132089 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-28 00:11:50.648333 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:50.648423 | orchestrator | 2026-02-28 00:11:50.648440 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-28 00:11:52.440047 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:52.440132 | orchestrator | 2026-02-28 00:11:52.440147 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-28 
00:11:53.586310 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-28 00:11:53.586360 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-28 00:11:53.586371 | orchestrator | 2026-02-28 00:11:53.586380 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-28 00:11:53.630742 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-28 00:11:53.630809 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-28 00:11:53.630822 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-28 00:11:53.630833 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-28 00:11:56.832981 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-28 00:11:56.833017 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-28 00:11:56.833021 | orchestrator | 2026-02-28 00:11:56.833026 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-28 00:11:57.437340 | orchestrator | changed: [testbed-manager] 2026-02-28 00:11:57.437403 | orchestrator | 2026-02-28 00:11:57.437412 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-28 00:12:19.232082 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-28 00:12:19.232136 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-28 00:12:19.232146 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-28 00:12:19.232152 | orchestrator | 2026-02-28 00:12:19.232159 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-28 00:12:21.652409 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-02-28 00:12:21.652506 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-28 00:12:21.652519 | orchestrator | 2026-02-28 00:12:21.652529 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-28 00:12:21.652538 | orchestrator | 2026-02-28 00:12:21.652547 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:12:23.119856 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:23.119999 | orchestrator | 2026-02-28 00:12:23.120028 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-28 00:12:23.172168 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:23.172262 | orchestrator | 2026-02-28 00:12:23.172279 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-28 00:12:23.248166 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:23.248222 | orchestrator | 2026-02-28 00:12:23.248229 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-28 00:12:24.076300 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:24.076390 | orchestrator | 2026-02-28 00:12:24.076407 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-28 00:12:24.854307 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:24.854389 | orchestrator | 2026-02-28 00:12:24.854406 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-28 00:12:26.308972 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-28 00:12:26.309051 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-28 00:12:26.309064 | orchestrator | 2026-02-28 00:12:26.309102 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-02-28 00:12:27.756587 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:27.756642 | orchestrator | 2026-02-28 00:12:27.756650 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-28 00:12:29.603507 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-28 00:12:29.603784 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-28 00:12:29.603805 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-28 00:12:29.603817 | orchestrator | 2026-02-28 00:12:29.603830 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-28 00:12:29.655375 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:29.655506 | orchestrator | 2026-02-28 00:12:29.655525 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-28 00:12:29.734602 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:29.734709 | orchestrator | 2026-02-28 00:12:29.734732 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-28 00:12:30.328808 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:30.328872 | orchestrator | 2026-02-28 00:12:30.328882 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-28 00:12:30.403951 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:30.404052 | orchestrator | 2026-02-28 00:12:30.404075 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-28 00:12:31.311473 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:12:31.311566 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:31.311582 | orchestrator | 2026-02-28 00:12:31.311594 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-28 00:12:31.347335 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:31.347710 | orchestrator | 2026-02-28 00:12:31.347732 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-28 00:12:31.389183 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:31.389224 | orchestrator | 2026-02-28 00:12:31.389234 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-28 00:12:31.435051 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:31.435130 | orchestrator | 2026-02-28 00:12:31.435146 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-28 00:12:31.528678 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:31.528734 | orchestrator | 2026-02-28 00:12:31.528741 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-28 00:12:32.289921 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:32.289957 | orchestrator | 2026-02-28 00:12:32.289963 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-28 00:12:32.289968 | orchestrator | 2026-02-28 00:12:32.289972 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:12:33.690870 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:33.690907 | orchestrator | 2026-02-28 00:12:33.690913 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-28 00:12:34.682621 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:34.682711 | orchestrator | 2026-02-28 00:12:34.682728 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:12:34.682741 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-28 00:12:34.682753 | orchestrator | 2026-02-28 00:12:35.252735 | orchestrator | ok: Runtime: 0:07:25.538667 2026-02-28 00:12:35.272178 | 2026-02-28 00:12:35.272402 | TASK [Point out that the log in on the manager is now possible] 2026-02-28 00:12:35.320944 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-28 00:12:35.333010 | 2026-02-28 00:12:35.333147 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-28 00:12:35.382822 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-02-28 00:12:35.392764 | 2026-02-28 00:12:35.392906 | TASK [Run manager part 1 + 2] 2026-02-28 00:12:36.261155 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:12:36.317832 | orchestrator | 2026-02-28 00:12:36.317915 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-28 00:12:36.317930 | orchestrator | 2026-02-28 00:12:36.317957 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:12:38.850003 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:38.850265 | orchestrator | 2026-02-28 00:12:38.850327 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-28 00:12:38.889707 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:38.889789 | orchestrator | 2026-02-28 00:12:38.889811 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-28 00:12:38.940639 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:38.940706 | orchestrator | 2026-02-28 00:12:38.940719 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-02-28 00:12:38.997268 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:38.997332 | orchestrator | 2026-02-28 00:12:38.997345 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-28 00:12:39.079970 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:39.080058 | orchestrator | 2026-02-28 00:12:39.080078 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-28 00:12:39.149490 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:39.149576 | orchestrator | 2026-02-28 00:12:39.149596 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-28 00:12:39.191390 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-28 00:12:39.191505 | orchestrator | 2026-02-28 00:12:39.191523 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-28 00:12:39.939072 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:39.939136 | orchestrator | 2026-02-28 00:12:39.939286 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-28 00:12:39.981499 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:39.981547 | orchestrator | 2026-02-28 00:12:39.981553 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-28 00:12:41.517251 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:41.517325 | orchestrator | 2026-02-28 00:12:41.517339 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-28 00:12:42.113582 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:42.113664 | orchestrator | 2026-02-28 00:12:42.113678 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-02-28 00:12:43.349948 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:43.350074 | orchestrator | 2026-02-28 00:12:43.350096 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-28 00:12:59.043237 | orchestrator | changed: [testbed-manager] 2026-02-28 00:12:59.043345 | orchestrator | 2026-02-28 00:12:59.043363 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-28 00:12:59.755901 | orchestrator | ok: [testbed-manager] 2026-02-28 00:12:59.755959 | orchestrator | 2026-02-28 00:12:59.755970 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-28 00:12:59.823512 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:12:59.823575 | orchestrator | 2026-02-28 00:12:59.823586 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-28 00:13:00.860787 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:00.860855 | orchestrator | 2026-02-28 00:13:00.860866 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-28 00:13:01.860053 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:01.860146 | orchestrator | 2026-02-28 00:13:01.860162 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-28 00:13:02.471320 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:02.471405 | orchestrator | 2026-02-28 00:13:02.471457 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-28 00:13:02.514260 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-28 00:13:02.514339 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-02-28 00:13:02.514348 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-28 00:13:02.514355 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-28 00:13:05.030873 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:05.031018 | orchestrator | 2026-02-28 00:13:05.031031 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-28 00:13:14.871203 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-28 00:13:14.871294 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-28 00:13:14.871311 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-28 00:13:14.871331 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-28 00:13:14.871359 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-28 00:13:14.871377 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-28 00:13:14.871395 | orchestrator | 2026-02-28 00:13:14.871448 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-28 00:13:15.954302 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:15.954346 | orchestrator | 2026-02-28 00:13:15.954355 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-28 00:13:15.998791 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:15.998874 | orchestrator | 2026-02-28 00:13:15.998890 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-28 00:13:19.229696 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:19.229745 | orchestrator | 2026-02-28 00:13:19.229756 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-28 00:13:19.274925 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:19.274964 | 
orchestrator | 2026-02-28 00:13:19.274970 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-28 00:15:08.772379 | orchestrator | changed: [testbed-manager] 2026-02-28 00:15:08.772489 | orchestrator | 2026-02-28 00:15:08.772544 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-28 00:15:09.969578 | orchestrator | ok: [testbed-manager] 2026-02-28 00:15:09.969664 | orchestrator | 2026-02-28 00:15:09.969681 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:15:09.969695 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-28 00:15:09.969707 | orchestrator | 2026-02-28 00:15:10.530195 | orchestrator | ok: Runtime: 0:02:34.406282 2026-02-28 00:15:10.545152 | 2026-02-28 00:15:10.545302 | TASK [Reboot manager] 2026-02-28 00:15:12.080101 | orchestrator | ok: Runtime: 0:00:00.982712 2026-02-28 00:15:12.097610 | 2026-02-28 00:15:12.097753 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-28 00:15:28.539314 | orchestrator | ok 2026-02-28 00:15:28.547971 | 2026-02-28 00:15:28.548107 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-28 00:16:28.601907 | orchestrator | ok 2026-02-28 00:16:28.611144 | 2026-02-28 00:16:28.611273 | TASK [Deploy manager + bootstrap nodes] 2026-02-28 00:16:31.219512 | orchestrator | 2026-02-28 00:16:31.219739 | orchestrator | # DEPLOY MANAGER 2026-02-28 00:16:31.219763 | orchestrator | 2026-02-28 00:16:31.219778 | orchestrator | + set -e 2026-02-28 00:16:31.219833 | orchestrator | + echo 2026-02-28 00:16:31.219848 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-28 00:16:31.219866 | orchestrator | + echo 2026-02-28 00:16:31.219916 | orchestrator | + cat /opt/manager-vars.sh 2026-02-28 00:16:31.224544 | orchestrator | export NUMBER_OF_NODES=6 2026-02-28 
00:16:31.224627 | orchestrator | 2026-02-28 00:16:31.224641 | orchestrator | export CEPH_VERSION=reef 2026-02-28 00:16:31.224653 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-28 00:16:31.224665 | orchestrator | export MANAGER_VERSION=9.5.0 2026-02-28 00:16:31.224692 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-28 00:16:31.224702 | orchestrator | 2026-02-28 00:16:31.224718 | orchestrator | export ARA=false 2026-02-28 00:16:31.224728 | orchestrator | export DEPLOY_MODE=manager 2026-02-28 00:16:31.224744 | orchestrator | export TEMPEST=true 2026-02-28 00:16:31.224754 | orchestrator | export IS_ZUUL=true 2026-02-28 00:16:31.224764 | orchestrator | 2026-02-28 00:16:31.224805 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:16:31.224817 | orchestrator | export EXTERNAL_API=false 2026-02-28 00:16:31.224827 | orchestrator | 2026-02-28 00:16:31.224836 | orchestrator | export IMAGE_USER=ubuntu 2026-02-28 00:16:31.224849 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:31.224872 | orchestrator | 2026-02-28 00:16:31.224882 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-28 00:16:31.224892 | orchestrator | 2026-02-28 00:16:31.224911 | orchestrator | + echo 2026-02-28 00:16:31.224923 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:16:31.225607 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:16:31.225639 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:16:31.225658 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:16:31.225676 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:16:31.225692 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:16:31.225710 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:16:31.225721 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:16:31.225732 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:16:31.225742 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:16:31.225753 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-02-28 00:16:31.225764 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:16:31.225774 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-28 00:16:31.225810 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-28 00:16:31.225821 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-28 00:16:31.225845 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-28 00:16:31.225856 | orchestrator | ++ export ARA=false 2026-02-28 00:16:31.225868 | orchestrator | ++ ARA=false 2026-02-28 00:16:31.225879 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:16:31.225889 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:16:31.225900 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:16:31.225910 | orchestrator | ++ TEMPEST=true 2026-02-28 00:16:31.225921 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:16:31.225932 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:16:31.225943 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:16:31.225954 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:16:31.225963 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:16:31.225973 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:16:31.225982 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:16:31.225992 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:16:31.226002 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:31.226011 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:31.226065 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:16:31.226076 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:16:31.226094 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-28 00:16:31.276606 | orchestrator | + docker version 2026-02-28 00:16:31.393851 | orchestrator | Client: Docker Engine - Community 2026-02-28 00:16:31.393953 | orchestrator | Version: 27.5.1 
2026-02-28 00:16:31.393967 | orchestrator | API version: 1.47 2026-02-28 00:16:31.393981 | orchestrator | Go version: go1.22.11 2026-02-28 00:16:31.393992 | orchestrator | Git commit: 9f9e405 2026-02-28 00:16:31.394003 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-28 00:16:31.394068 | orchestrator | OS/Arch: linux/amd64 2026-02-28 00:16:31.394082 | orchestrator | Context: default 2026-02-28 00:16:31.394093 | orchestrator | 2026-02-28 00:16:31.394105 | orchestrator | Server: Docker Engine - Community 2026-02-28 00:16:31.394116 | orchestrator | Engine: 2026-02-28 00:16:31.394127 | orchestrator | Version: 27.5.1 2026-02-28 00:16:31.394139 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-28 00:16:31.394177 | orchestrator | Go version: go1.22.11 2026-02-28 00:16:31.394189 | orchestrator | Git commit: 4c9b3b0 2026-02-28 00:16:31.394200 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-28 00:16:31.394211 | orchestrator | OS/Arch: linux/amd64 2026-02-28 00:16:31.394221 | orchestrator | Experimental: false 2026-02-28 00:16:31.394232 | orchestrator | containerd: 2026-02-28 00:16:31.394243 | orchestrator | Version: v2.2.1 2026-02-28 00:16:31.394254 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-28 00:16:31.394266 | orchestrator | runc: 2026-02-28 00:16:31.394277 | orchestrator | Version: 1.3.4 2026-02-28 00:16:31.394288 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-28 00:16:31.394298 | orchestrator | docker-init: 2026-02-28 00:16:31.394309 | orchestrator | Version: 0.19.0 2026-02-28 00:16:31.394321 | orchestrator | GitCommit: de40ad0 2026-02-28 00:16:31.396819 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-28 00:16:31.405141 | orchestrator | + set -e 2026-02-28 00:16:31.405221 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:16:31.405229 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:16:31.405235 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 
00:16:31.405240 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:16:31.405244 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:16:31.405248 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 00:16:31.405254 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:16:31.405258 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-28 00:16:31.405263 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-28 00:16:31.405267 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-28 00:16:31.405271 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-28 00:16:31.405275 | orchestrator | ++ export ARA=false 2026-02-28 00:16:31.405280 | orchestrator | ++ ARA=false 2026-02-28 00:16:31.405284 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:16:31.405288 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:16:31.405292 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:16:31.405296 | orchestrator | ++ TEMPEST=true 2026-02-28 00:16:31.405300 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:16:31.405304 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:16:31.405308 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:16:31.405312 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:16:31.405316 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:16:31.405320 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:16:31.405324 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:16:31.405328 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:16:31.405332 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:31.405335 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:16:31.405339 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:16:31.405343 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:16:31.405347 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:16:31.405351 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-28 00:16:31.405355 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:16:31.405359 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:16:31.405367 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:16:31.405371 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-28 00:16:31.405375 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-28 00:16:31.411342 | orchestrator | + set -e 2026-02-28 00:16:31.411388 | orchestrator | + VERSION=9.5.0 2026-02-28 00:16:31.411397 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:31.419664 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-28 00:16:31.419709 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:31.423723 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:16:31.428308 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-28 00:16:31.436842 | orchestrator | + set -e 2026-02-28 00:16:31.436907 | orchestrator | /opt/configuration ~ 2026-02-28 00:16:31.436920 | orchestrator | + pushd /opt/configuration 2026-02-28 00:16:31.436931 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:16:31.440242 | orchestrator | + source /opt/venv/bin/activate 2026-02-28 00:16:31.441700 | orchestrator | ++ deactivate nondestructive 2026-02-28 00:16:31.441738 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:31.441754 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:31.441825 | orchestrator | ++ hash -r 2026-02-28 00:16:31.441838 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:31.441849 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-28 00:16:31.441859 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-28 00:16:31.441870 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-28 00:16:31.441883 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-28 00:16:31.441894 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-28 00:16:31.441904 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-28 00:16:31.441915 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-28 00:16:31.441928 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:31.441940 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:31.441951 | orchestrator | ++ export PATH 2026-02-28 00:16:31.441962 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:31.441973 | orchestrator | ++ '[' -z '' ']' 2026-02-28 00:16:31.441984 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-28 00:16:31.441994 | orchestrator | ++ PS1='(venv) ' 2026-02-28 00:16:31.442005 | orchestrator | ++ export PS1 2026-02-28 00:16:31.442062 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-28 00:16:31.442077 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-28 00:16:31.442089 | orchestrator | ++ hash -r 2026-02-28 00:16:31.442100 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-28 00:16:32.642438 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-28 00:16:32.643486 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-28 00:16:32.644963 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-28 00:16:32.646373 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-28 00:16:32.647443 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-28 00:16:32.657603 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-28 00:16:32.658902 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-28 00:16:32.659883 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-28 00:16:32.661262 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-28 00:16:32.694399 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-28 00:16:32.695757 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-28 00:16:32.697120 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-28 00:16:32.698467 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-02-28 00:16:32.702455 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-28 00:16:32.915228 | orchestrator | ++ which gilt 2026-02-28 00:16:32.917732 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-28 00:16:32.917840 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-28 00:16:33.254212 | orchestrator | osism.cfg-generics: 2026-02-28 00:16:33.407389 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-28 00:16:33.407514 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-28 00:16:33.407630 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-28 00:16:33.407646 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-28 00:16:34.056238 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-28 00:16:34.763875 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-28 00:16:34.763957 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-28 00:16:34.763972 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:16:34.763986 | orchestrator | + deactivate 2026-02-28 00:16:34.763998 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-28 00:16:34.764009 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:34.764020 | orchestrator | + export PATH 2026-02-28 00:16:34.764032 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-28 00:16:34.764043 | orchestrator | + '[' -n '' ']' 2026-02-28 00:16:34.764055 | orchestrator | + hash -r 2026-02-28 00:16:34.764066 | orchestrator | + '[' -n '' ']' 2026-02-28 00:16:34.764077 | orchestrator | + unset VIRTUAL_ENV 2026-02-28 00:16:34.764087 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-28 00:16:34.764100 | orchestrator | ~ 2026-02-28 00:16:34.764112 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-28 00:16:34.764123 | orchestrator | + unset -f deactivate 2026-02-28 00:16:34.764135 | orchestrator | + popd 2026-02-28 00:16:34.764146 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-28 00:16:34.764157 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-28 00:16:34.764186 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-28 00:16:34.764197 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-28 00:16:34.764208 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-28 00:16:34.764219 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-28 00:16:34.764229 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:16:34.764240 | orchestrator | ++ semver 2024.2 2025.1 2026-02-28 00:16:34.764251 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:16:34.764261 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-28 00:16:34.764272 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:16:34.764283 | orchestrator | + source /opt/venv/bin/activate 2026-02-28 00:16:34.764293 | orchestrator | ++ deactivate nondestructive 2026-02-28 00:16:34.764304 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:34.764315 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:34.764325 | orchestrator | ++ hash -r 2026-02-28 00:16:34.764336 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:34.764346 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-28 00:16:34.764357 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-28 00:16:34.764367 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-28 00:16:34.764378 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-28 00:16:34.764389 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-28 00:16:34.764400 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-28 00:16:34.764411 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-28 00:16:34.764422 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:34.764456 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:16:34.764467 | orchestrator | ++ export PATH 2026-02-28 00:16:34.764478 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:16:34.764488 | orchestrator | ++ '[' -z '' ']' 2026-02-28 00:16:34.764499 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-28 00:16:34.764510 | orchestrator | ++ PS1='(venv) ' 2026-02-28 00:16:34.764520 | orchestrator | ++ export PS1 2026-02-28 00:16:34.764531 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-28 00:16:34.764542 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-28 00:16:34.764553 | orchestrator | ++ hash -r 2026-02-28 00:16:34.764564 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-28 00:16:35.982473 | orchestrator | 2026-02-28 00:16:35.982589 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-28 00:16:35.982606 | orchestrator | 2026-02-28 00:16:35.982624 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-28 00:16:36.591580 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:36.591683 | orchestrator | 2026-02-28 00:16:36.591698 | orchestrator | TASK [Copy fact files] ********************************************************* 
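The gating earlier in this trace (`semver 9.5.0 7.0.0` → `1`, then `[[ 1 -ge 0 ]]` emits `enable_osism_kubernetes: true`; `semver 9.5.0 10.0.0-0` → `-1` and `semver 2024.2 2025.1` → `-1` skip their branches) relies on a shell `semver` helper that returns -1/0/1. A minimal stdlib sketch of that comparison, assuming standard semver precedence where a `-N` pre-release suffix sorts before the bare release (the helper name mirrors the shell function; the real script's implementation is not shown in this log):

```python
def semver(a: str, b: str) -> int:
    """Return -1, 0 or 1 depending on how version a compares to b."""
    def key(v: str):
        release, _, pre = v.partition("-")
        nums = tuple(int(x) for x in release.split("."))
        # per semver precedence, a pre-release like "-0" sorts
        # before the bare release of the same version
        return (nums, (0, int(pre)) if pre else (1,))
    ka, kb = key(a), key(b)
    return (ka > kb) - (ka < kb)

# The three comparisons from the trace above:
print(semver("9.5.0", "7.0.0"))     # 1  -> gate passes
print(semver("9.5.0", "10.0.0-0"))  # -1 -> branch skipped
print(semver("2024.2", "2025.1"))   # -1 -> branch skipped
```

This only handles dotted numeric releases with an optional numeric pre-release part, which is all the trace exercises.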
2026-02-28 00:16:37.610708 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:37.610893 | orchestrator | 2026-02-28 00:16:37.610914 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-28 00:16:37.610986 | orchestrator | 2026-02-28 00:16:37.611007 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:16:40.055077 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:40.055199 | orchestrator | 2026-02-28 00:16:40.055215 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-28 00:16:40.110930 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:40.111066 | orchestrator | 2026-02-28 00:16:40.111095 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-28 00:16:40.601665 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:40.601799 | orchestrator | 2026-02-28 00:16:40.601859 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-28 00:16:40.643452 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:40.643539 | orchestrator | 2026-02-28 00:16:40.643553 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-28 00:16:40.989941 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:40.990045 | orchestrator | 2026-02-28 00:16:40.990057 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-28 00:16:41.307018 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:41.307113 | orchestrator | 2026-02-28 00:16:41.307129 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-28 00:16:41.410233 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:41.410320 | orchestrator | 2026-02-28 00:16:41.410333 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-28 00:16:41.410344 | orchestrator | 2026-02-28 00:16:41.410355 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:16:43.143178 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:43.143283 | orchestrator | 2026-02-28 00:16:43.143301 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-28 00:16:43.252670 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-28 00:16:43.252765 | orchestrator | 2026-02-28 00:16:43.252779 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-28 00:16:43.313750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-28 00:16:43.313879 | orchestrator | 2026-02-28 00:16:43.313902 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-28 00:16:44.435176 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-28 00:16:44.435267 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-28 00:16:44.435280 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-28 00:16:44.435289 | orchestrator | 2026-02-28 00:16:44.435302 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-28 00:16:46.261893 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-28 00:16:46.261998 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-28 00:16:46.262013 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-28 00:16:46.262068 | orchestrator | 2026-02-28 00:16:46.262079 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-28 00:16:46.932783 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:16:46.932945 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:46.932964 | orchestrator | 2026-02-28 00:16:46.932977 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-28 00:16:47.595270 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:16:47.595368 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:47.595382 | orchestrator | 2026-02-28 00:16:47.595393 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-28 00:16:47.659577 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:16:47.659653 | orchestrator | 2026-02-28 00:16:47.659661 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-28 00:16:48.041005 | orchestrator | ok: [testbed-manager] 2026-02-28 00:16:48.041109 | orchestrator | 2026-02-28 00:16:48.041128 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-28 00:16:48.121992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-28 00:16:48.122148 | orchestrator | 2026-02-28 00:16:48.122165 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-28 00:16:49.204895 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:49.205026 | orchestrator | 2026-02-28 00:16:49.205046 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-28 00:16:50.081388 | orchestrator | changed: [testbed-manager] 2026-02-28 00:16:50.081483 | orchestrator | 2026-02-28 00:16:50.081498 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-28 00:17:10.928578 | 
orchestrator | changed: [testbed-manager] 2026-02-28 00:17:10.928688 | orchestrator | 2026-02-28 00:17:10.928699 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-28 00:17:10.976968 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:10.977038 | orchestrator | 2026-02-28 00:17:10.977061 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-28 00:17:10.977067 | orchestrator | 2026-02-28 00:17:10.977072 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:17:12.774396 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:12.774504 | orchestrator | 2026-02-28 00:17:12.774521 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-28 00:17:12.888256 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-28 00:17:12.888356 | orchestrator | 2026-02-28 00:17:12.888373 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-28 00:17:12.947203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:17:12.947297 | orchestrator | 2026-02-28 00:17:12.947312 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-28 00:17:15.535503 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:15.535608 | orchestrator | 2026-02-28 00:17:15.535624 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-28 00:17:15.586387 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:15.586506 | orchestrator | 2026-02-28 00:17:15.586529 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-28 00:17:15.739439 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-28 00:17:15.739566 | orchestrator | 2026-02-28 00:17:15.739589 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-28 00:17:18.619456 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-28 00:17:18.619549 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-28 00:17:18.619559 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-28 00:17:18.619567 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-28 00:17:18.619574 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-28 00:17:18.619582 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-28 00:17:18.619588 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-28 00:17:18.619595 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-28 00:17:18.619602 | orchestrator | 2026-02-28 00:17:18.619609 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-28 00:17:19.256832 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:19.256976 | orchestrator | 2026-02-28 00:17:19.256999 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-28 00:17:19.904685 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:19.904785 | orchestrator | 2026-02-28 00:17:19.904800 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-28 00:17:20.003141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-28 00:17:20.003246 | orchestrator | 2026-02-28 00:17:20.003262 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-28 00:17:21.265207 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-28 00:17:21.265309 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-28 00:17:21.265323 | orchestrator | 2026-02-28 00:17:21.265333 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-28 00:17:21.888475 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:21.888573 | orchestrator | 2026-02-28 00:17:21.888588 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-28 00:17:21.944408 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:21.944503 | orchestrator | 2026-02-28 00:17:21.944517 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-28 00:17:22.022539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-28 00:17:22.022648 | orchestrator | 2026-02-28 00:17:22.022671 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-28 00:17:22.708511 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:22.708611 | orchestrator | 2026-02-28 00:17:22.708628 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-28 00:17:22.778438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-28 00:17:22.778526 | orchestrator | 2026-02-28 00:17:22.778539 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-28 00:17:24.241456 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:17:24.241582 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-28 00:17:24.241606 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:24.241626 | orchestrator | 2026-02-28 00:17:24.241645 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-28 00:17:24.881009 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:24.881108 | orchestrator | 2026-02-28 00:17:24.881123 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-28 00:17:24.931862 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:24.931987 | orchestrator | 2026-02-28 00:17:24.932002 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-28 00:17:25.037602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-28 00:17:25.037711 | orchestrator | 2026-02-28 00:17:25.037730 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-28 00:17:25.558178 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:25.558301 | orchestrator | 2026-02-28 00:17:25.558319 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-28 00:17:25.969470 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:25.969572 | orchestrator | 2026-02-28 00:17:25.969587 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-28 00:17:27.258402 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-28 00:17:27.258488 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-28 00:17:27.258499 | orchestrator | 2026-02-28 00:17:27.258509 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-28 00:17:27.917572 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:17:27.917696 | orchestrator | 2026-02-28 00:17:27.917725 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-28 00:17:28.298174 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:28.298276 | orchestrator | 2026-02-28 00:17:28.298292 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-28 00:17:28.660299 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:28.660399 | orchestrator | 2026-02-28 00:17:28.660416 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-28 00:17:28.699119 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:28.699231 | orchestrator | 2026-02-28 00:17:28.699249 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-28 00:17:28.780262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-28 00:17:28.780396 | orchestrator | 2026-02-28 00:17:28.780414 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-28 00:17:28.825230 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:28.825320 | orchestrator | 2026-02-28 00:17:28.825334 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-28 00:17:30.906373 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-28 00:17:30.906457 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-28 00:17:30.906465 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-28 00:17:30.906470 | orchestrator | 2026-02-28 00:17:30.906476 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-28 00:17:31.622704 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:17:31.622781 | orchestrator | 2026-02-28 00:17:31.622791 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-28 00:17:32.343498 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:32.343599 | orchestrator | 2026-02-28 00:17:32.343615 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-28 00:17:33.052637 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:33.052739 | orchestrator | 2026-02-28 00:17:33.052755 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-28 00:17:33.139361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-28 00:17:33.139458 | orchestrator | 2026-02-28 00:17:33.139475 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-28 00:17:33.196267 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:33.196361 | orchestrator | 2026-02-28 00:17:33.196377 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-28 00:17:33.891533 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-28 00:17:33.891668 | orchestrator | 2026-02-28 00:17:33.891700 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-28 00:17:33.970813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-28 00:17:33.970910 | orchestrator | 2026-02-28 00:17:33.970920 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-28 00:17:34.689533 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:34.689624 | orchestrator | 2026-02-28 00:17:34.689635 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-28 00:17:35.334661 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:35.334734 | orchestrator | 2026-02-28 00:17:35.334742 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-28 00:17:35.383359 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:35.383438 | orchestrator | 2026-02-28 00:17:35.383447 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-28 00:17:35.444358 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:35.444460 | orchestrator | 2026-02-28 00:17:35.444479 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-28 00:17:36.270451 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:36.270571 | orchestrator | 2026-02-28 00:17:36.270597 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-28 00:18:42.120796 | orchestrator | changed: [testbed-manager] 2026-02-28 00:18:42.120945 | orchestrator | 2026-02-28 00:18:42.120965 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-28 00:18:43.148516 | orchestrator | ok: [testbed-manager] 2026-02-28 00:18:43.148623 | orchestrator | 2026-02-28 00:18:43.148640 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-28 00:18:43.211063 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:18:43.211139 | orchestrator | 2026-02-28 00:18:43.211147 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-28 00:18:45.533429 | orchestrator | changed: [testbed-manager] 2026-02-28 00:18:45.533529 | orchestrator | 2026-02-28 00:18:45.533543 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-28 00:18:45.589253 | orchestrator | ok: [testbed-manager] 2026-02-28 00:18:45.589351 | orchestrator | 2026-02-28 00:18:45.589366 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-28 00:18:45.589378 | orchestrator | 2026-02-28 00:18:45.589390 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-28 00:18:45.742517 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:18:45.742614 | orchestrator | 2026-02-28 00:18:45.742628 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-28 00:19:45.809211 | orchestrator | Pausing for 60 seconds 2026-02-28 00:19:45.809364 | orchestrator | changed: [testbed-manager] 2026-02-28 00:19:45.809380 | orchestrator | 2026-02-28 00:19:45.809391 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-28 00:19:48.508424 | orchestrator | changed: [testbed-manager] 2026-02-28 00:19:48.508533 | orchestrator | 2026-02-28 00:19:48.508550 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-28 00:20:50.555817 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-28 00:20:50.555906 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-28 00:20:50.555935 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
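The "Wait for an healthy manager service" handler above is a bounded retry loop: three attempts fail ("50/49/48 retries left"), the fourth reports `changed`. A hedged sketch of that polling pattern with an injectable health check (function and parameter names are illustrative, not the role's actual implementation):

```python
import time

def wait_for_healthy(check, retries: int = 50, delay: float = 0.0) -> int:
    """Poll check() until it returns True; return the attempt number
    that succeeded, or 0 if all retries were exhausted."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        time.sleep(delay)
    return 0

# Mirrors the log: attempts 1-3 fail, the 4th succeeds.
state = {"calls": 0}
def fake_check() -> bool:
    state["calls"] += 1
    return state["calls"] >= 4

print(wait_for_healthy(fake_check))  # 4
```

In the real role this corresponds to an Ansible task with `retries`/`until`, where the check would typically inspect the container's health status rather than call a local function.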
2026-02-28 00:20:50.555945 | orchestrator | changed: [testbed-manager] 2026-02-28 00:20:50.555955 | orchestrator | 2026-02-28 00:20:50.555964 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-28 00:21:01.482069 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:01.482147 | orchestrator | 2026-02-28 00:21:01.482156 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-28 00:21:01.565582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-28 00:21:01.565645 | orchestrator | 2026-02-28 00:21:01.565651 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-28 00:21:01.565657 | orchestrator | 2026-02-28 00:21:01.565661 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-28 00:21:01.607339 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:01.607422 | orchestrator | 2026-02-28 00:21:01.607437 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-28 00:21:01.696447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-28 00:21:01.696551 | orchestrator | 2026-02-28 00:21:01.696560 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-28 00:21:02.505872 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:02.505974 | orchestrator | 2026-02-28 00:21:02.505991 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-28 00:21:05.891090 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:05.891198 | orchestrator | 2026-02-28 00:21:05.891222 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-28 00:21:05.975130 | orchestrator | ok: [testbed-manager] => { 2026-02-28 00:21:05.975220 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-28 00:21:05.975236 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-28 00:21:05.975248 | orchestrator | "Checking running containers against expected versions...", 2026-02-28 00:21:05.975260 | orchestrator | "", 2026-02-28 00:21:05.975272 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-28 00:21:05.975284 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-28 00:21:05.975296 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975307 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-28 00:21:05.975318 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975329 | orchestrator | "", 2026-02-28 00:21:05.975340 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-28 00:21:05.975376 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-28 00:21:05.975388 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975399 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-28 00:21:05.975410 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975421 | orchestrator | "", 2026-02-28 00:21:05.975432 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-28 00:21:05.975442 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-28 00:21:05.975453 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975464 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-28 00:21:05.975525 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975537 | orchestrator | 
"", 2026-02-28 00:21:05.975553 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-28 00:21:05.975572 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-28 00:21:05.975589 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975607 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-28 00:21:05.975624 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975640 | orchestrator | "", 2026-02-28 00:21:05.975658 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-28 00:21:05.975677 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-28 00:21:05.975694 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975713 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-28 00:21:05.975731 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975750 | orchestrator | "", 2026-02-28 00:21:05.975769 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-28 00:21:05.975785 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.975798 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975809 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.975820 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975831 | orchestrator | "", 2026-02-28 00:21:05.975842 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-28 00:21:05.975852 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-28 00:21:05.975863 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975875 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-28 00:21:05.975886 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975897 | orchestrator | "", 2026-02-28 00:21:05.975907 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-28 00:21:05.975918 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-28 00:21:05.975929 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.975939 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-28 00:21:05.975950 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.975961 | orchestrator | "", 2026-02-28 00:21:05.975971 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-28 00:21:05.975982 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-28 00:21:05.975993 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976004 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-28 00:21:05.976014 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976025 | orchestrator | "", 2026-02-28 00:21:05.976036 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-28 00:21:05.976047 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-28 00:21:05.976058 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976068 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-28 00:21:05.976079 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976090 | orchestrator | "", 2026-02-28 00:21:05.976100 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-28 00:21:05.976123 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976134 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976145 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976156 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976167 | orchestrator | "", 2026-02-28 00:21:05.976178 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-28 00:21:05.976188 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976199 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976210 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976221 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976233 | orchestrator | "", 2026-02-28 00:21:05.976244 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-28 00:21:05.976254 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976265 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976276 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976287 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976298 | orchestrator | "", 2026-02-28 00:21:05.976308 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-28 00:21:05.976319 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976330 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976341 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976371 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976383 | orchestrator | "", 2026-02-28 00:21:05.976398 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-28 00:21:05.976417 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976446 | orchestrator | " Enabled: true", 2026-02-28 00:21:05.976465 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-28 00:21:05.976508 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:05.976526 | orchestrator | "", 2026-02-28 00:21:05.976544 | orchestrator | "=== Summary ===", 2026-02-28 00:21:05.976563 | orchestrator | "Errors (version mismatches): 0", 2026-02-28 00:21:05.976582 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-28 00:21:05.976600 | orchestrator | "", 2026-02-28 00:21:05.976617 | orchestrator | "✅ All running containers match expected versions!" 2026-02-28 00:21:05.976629 | orchestrator | ] 2026-02-28 00:21:05.976640 | orchestrator | } 2026-02-28 00:21:05.976652 | orchestrator | 2026-02-28 00:21:05.976663 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-28 00:21:06.030442 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:06.030556 | orchestrator | 2026-02-28 00:21:06.030566 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:06.030574 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-28 00:21:06.030581 | orchestrator | 2026-02-28 00:21:06.137818 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:21:06.137922 | orchestrator | + deactivate 2026-02-28 00:21:06.137938 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-28 00:21:06.137951 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:21:06.137962 | orchestrator | + export PATH 2026-02-28 00:21:06.137974 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-28 00:21:06.137985 | orchestrator | + '[' -n '' ']' 2026-02-28 00:21:06.137996 | orchestrator | + hash -r 2026-02-28 00:21:06.138007 | orchestrator | + '[' -n '' ']' 2026-02-28 00:21:06.138073 | orchestrator | + unset VIRTUAL_ENV 2026-02-28 00:21:06.138085 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-28 00:21:06.138096 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-28 00:21:06.138108 | orchestrator | + unset -f deactivate 2026-02-28 00:21:06.138120 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-28 00:21:06.146827 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-28 00:21:06.146876 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-28 00:21:06.146919 | orchestrator | + local max_attempts=60 2026-02-28 00:21:06.146932 | orchestrator | + local name=ceph-ansible 2026-02-28 00:21:06.146943 | orchestrator | + local attempt_num=1 2026-02-28 00:21:06.147536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:21:06.185149 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:06.185242 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-28 00:21:06.185258 | orchestrator | + local max_attempts=60 2026-02-28 00:21:06.185270 | orchestrator | + local name=kolla-ansible 2026-02-28 00:21:06.185282 | orchestrator | + local attempt_num=1 2026-02-28 00:21:06.185558 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-28 00:21:06.222081 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:06.222176 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-28 00:21:06.222203 | orchestrator | + local max_attempts=60 2026-02-28 00:21:06.222230 | orchestrator | + local name=osism-ansible 2026-02-28 00:21:06.222249 | orchestrator | + local attempt_num=1 2026-02-28 00:21:06.222586 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-28 00:21:06.253782 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:06.253885 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-28 00:21:06.253907 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-28 00:21:06.974790 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-28 00:21:07.141628 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-28 00:21:07.141723 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:07.141739 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:07.141751 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-28 00:21:07.141765 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-28 00:21:07.141800 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:07.141812 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:07.141827 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:07.141845 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:07.141864 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-28 00:21:07.141883 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-28 00:21:07.141900 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-28 00:21:07.141917 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:07.141964 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-28 00:21:07.141983 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:07.142002 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-28 00:21:07.147628 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-28 00:21:07.189300 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-28 00:21:07.189430 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-28 00:21:07.192442 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-28 00:21:19.462864 | orchestrator | 2026-02-28 00:21:19 | INFO  | Task 128b29c3-9c08-4ebe-8d7d-d97c4e244fa5 (resolvconf) was prepared for execution. 2026-02-28 00:21:19.462992 | orchestrator | 2026-02-28 00:21:19 | INFO  | It takes a moment until task 128b29c3-9c08-4ebe-8d7d-d97c4e244fa5 (resolvconf) has been started and output is visible here. 
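The `wait_for_container_healthy` helper visible in the shell trace above polls `docker inspect -f '{{.State.Health.Status}}'` until the container reports healthy. A minimal sketch of that loop follows; this is a hypothetical re-creation (the real script's sleep interval is not shown in the trace), with the inspect command and interval injectable so the retry logic can be exercised without a Docker daemon:

```shell
# Sketch of the health-wait loop seen in the trace above. inspect_cmd and
# interval are injectable assumptions, not taken from the job, so the retry
# logic can run without Docker.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local inspect_cmd=${3:-"docker inspect -f {{.State.Health.Status}}"}
    local interval=${4:-5}
    local attempt_num=1
    # Poll until the container reports "healthy" or attempts are exhausted.
    until [[ "$($inspect_cmd "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep "$interval"
    done
    return 0
}
```

In the job the trace shows calls like `wait_for_container_healthy 60 ceph-ansible`, which succeed immediately here because the containers are already `(healthy)` in the `docker compose ps` output.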
2026-02-28 00:21:33.703398 | orchestrator | 2026-02-28 00:21:33.703629 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-28 00:21:33.703662 | orchestrator | 2026-02-28 00:21:33.703682 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:21:33.703700 | orchestrator | Saturday 28 February 2026 00:21:23 +0000 (0:00:00.140) 0:00:00.140 ***** 2026-02-28 00:21:33.703719 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:33.703737 | orchestrator | 2026-02-28 00:21:33.703755 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-28 00:21:33.703774 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:03.947) 0:00:04.087 ***** 2026-02-28 00:21:33.703794 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:33.703814 | orchestrator | 2026-02-28 00:21:33.703832 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-28 00:21:33.703851 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:00.062) 0:00:04.150 ***** 2026-02-28 00:21:33.703863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-28 00:21:33.703876 | orchestrator | 2026-02-28 00:21:33.703887 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-28 00:21:33.703898 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:00.088) 0:00:04.239 ***** 2026-02-28 00:21:33.703932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:21:33.703946 | orchestrator | 2026-02-28 00:21:33.703959 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-28 00:21:33.703971 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:00.073) 0:00:04.313 ***** 2026-02-28 00:21:33.703984 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:33.703996 | orchestrator | 2026-02-28 00:21:33.704008 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-28 00:21:33.704021 | orchestrator | Saturday 28 February 2026 00:21:28 +0000 (0:00:01.109) 0:00:05.422 ***** 2026-02-28 00:21:33.704033 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:33.704051 | orchestrator | 2026-02-28 00:21:33.704070 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-28 00:21:33.704090 | orchestrator | Saturday 28 February 2026 00:21:28 +0000 (0:00:00.064) 0:00:05.487 ***** 2026-02-28 00:21:33.704141 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:33.704164 | orchestrator | 2026-02-28 00:21:33.704184 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-28 00:21:33.704203 | orchestrator | Saturday 28 February 2026 00:21:29 +0000 (0:00:00.522) 0:00:06.010 ***** 2026-02-28 00:21:33.704217 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:33.704228 | orchestrator | 2026-02-28 00:21:33.704239 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-28 00:21:33.704251 | orchestrator | Saturday 28 February 2026 00:21:29 +0000 (0:00:00.069) 0:00:06.079 ***** 2026-02-28 00:21:33.704262 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:33.704273 | orchestrator | 2026-02-28 00:21:33.704283 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-28 00:21:33.704294 | orchestrator | Saturday 28 February 2026 00:21:30 +0000 (0:00:00.571) 0:00:06.651 ***** 2026-02-28 00:21:33.704305 | orchestrator | changed: 
[testbed-manager] 2026-02-28 00:21:33.704315 | orchestrator | 2026-02-28 00:21:33.704326 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-28 00:21:33.704337 | orchestrator | Saturday 28 February 2026 00:21:31 +0000 (0:00:01.089) 0:00:07.741 ***** 2026-02-28 00:21:33.704348 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:33.704359 | orchestrator | 2026-02-28 00:21:33.704369 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-28 00:21:33.704380 | orchestrator | Saturday 28 February 2026 00:21:32 +0000 (0:00:00.972) 0:00:08.713 ***** 2026-02-28 00:21:33.704391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-28 00:21:33.704402 | orchestrator | 2026-02-28 00:21:33.704412 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-28 00:21:33.704423 | orchestrator | Saturday 28 February 2026 00:21:32 +0000 (0:00:00.076) 0:00:08.790 ***** 2026-02-28 00:21:33.704434 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:33.704444 | orchestrator | 2026-02-28 00:21:33.704455 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:33.704467 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:21:33.704478 | orchestrator | 2026-02-28 00:21:33.704489 | orchestrator | 2026-02-28 00:21:33.704499 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:21:33.704510 | orchestrator | Saturday 28 February 2026 00:21:33 +0000 (0:00:01.237) 0:00:10.028 ***** 2026-02-28 00:21:33.704545 | orchestrator | =============================================================================== 2026-02-28 00:21:33.704556 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.95s 2026-02-28 00:21:33.704567 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s 2026-02-28 00:21:33.704577 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s 2026-02-28 00:21:33.704588 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2026-02-28 00:21:33.704599 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s 2026-02-28 00:21:33.704609 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-02-28 00:21:33.704644 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s 2026-02-28 00:21:33.704655 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-28 00:21:33.704666 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-28 00:21:33.704677 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-02-28 00:21:33.704687 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-02-28 00:21:33.704698 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-28 00:21:33.704718 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-02-28 00:21:34.008265 | orchestrator | + osism apply sshconfig 2026-02-28 00:21:46.030303 | orchestrator | 2026-02-28 00:21:46 | INFO  | Task 09514f40-079e-49d7-8ace-915db5492301 (sshconfig) was prepared for execution. 
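The resolvconf play above replaces `/etc/resolv.conf` with a symlink to `/run/systemd/resolve/stub-resolv.conf` and restarts `systemd-resolved`. A minimal post-check for that symlink could look like the sketch below; the helper name is hypothetical and not part of the job, and it reads the link target without resolving it so it also works on machines where `/run/systemd/resolve` does not exist:

```shell
# Hypothetical sanity check for the symlink created by osism.commons.resolvconf.
check_resolv_link() {
    local path=${1:-/etc/resolv.conf}
    local target
    # readlink (without -f) returns the raw link target, even if it dangles.
    target=$(readlink "$path") || { echo "$path is not a symlink" >&2; return 1; }
    case "$target" in
        /run/systemd/resolve/*resolv.conf)
            echo "ok: $path -> $target" ;;
        *)
            echo "unexpected resolv.conf target: $target" >&2
            return 1 ;;
    esac
}
```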
2026-02-28 00:21:46.030408 | orchestrator | 2026-02-28 00:21:46 | INFO  | It takes a moment until task 09514f40-079e-49d7-8ace-915db5492301 (sshconfig) has been started and output is visible here. 2026-02-28 00:21:58.109547 | orchestrator | 2026-02-28 00:21:58.109698 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-28 00:21:58.109717 | orchestrator | 2026-02-28 00:21:58.109742 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-28 00:21:58.109749 | orchestrator | Saturday 28 February 2026 00:21:50 +0000 (0:00:00.162) 0:00:00.162 ***** 2026-02-28 00:21:58.109756 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:58.109763 | orchestrator | 2026-02-28 00:21:58.109769 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-28 00:21:58.109776 | orchestrator | Saturday 28 February 2026 00:21:50 +0000 (0:00:00.581) 0:00:00.743 ***** 2026-02-28 00:21:58.109783 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:58.109790 | orchestrator | 2026-02-28 00:21:58.109796 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-28 00:21:58.109803 | orchestrator | Saturday 28 February 2026 00:21:51 +0000 (0:00:00.505) 0:00:01.249 ***** 2026-02-28 00:21:58.109809 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:21:58.109816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:21:58.109822 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:21:58.109831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:21:58.109840 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:21:58.109851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:21:58.109862 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-28 00:21:58.109872 | orchestrator | 2026-02-28 00:21:58.109880 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-28 00:21:58.109886 | orchestrator | Saturday 28 February 2026 00:21:57 +0000 (0:00:05.806) 0:00:07.056 ***** 2026-02-28 00:21:58.109892 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:58.109898 | orchestrator | 2026-02-28 00:21:58.109905 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-28 00:21:58.109911 | orchestrator | Saturday 28 February 2026 00:21:57 +0000 (0:00:00.087) 0:00:07.143 ***** 2026-02-28 00:21:58.109917 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:58.109924 | orchestrator | 2026-02-28 00:21:58.109930 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:58.109938 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:21:58.109944 | orchestrator | 2026-02-28 00:21:58.109951 | orchestrator | 2026-02-28 00:21:58.109957 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:21:58.109963 | orchestrator | Saturday 28 February 2026 00:21:57 +0000 (0:00:00.565) 0:00:07.708 ***** 2026-02-28 00:21:58.109970 | orchestrator | =============================================================================== 2026-02-28 00:21:58.109976 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.81s 2026-02-28 00:21:58.109982 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2026-02-28 00:21:58.109989 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2026-02-28 00:21:58.109995 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.51s 2026-02-28 00:21:58.110001 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-02-28 00:21:58.417102 | orchestrator | + osism apply known-hosts 2026-02-28 00:22:10.442986 | orchestrator | 2026-02-28 00:22:10 | INFO  | Task cb57c9cf-a8f7-4e53-bb0b-55454093c16f (known-hosts) was prepared for execution. 2026-02-28 00:22:10.443109 | orchestrator | 2026-02-28 00:22:10 | INFO  | It takes a moment until task cb57c9cf-a8f7-4e53-bb0b-55454093c16f (known-hosts) has been started and output is visible here. 2026-02-28 00:22:27.416003 | orchestrator | 2026-02-28 00:22:27.416116 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-28 00:22:27.416133 | orchestrator | 2026-02-28 00:22:27.416146 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-28 00:22:27.416158 | orchestrator | Saturday 28 February 2026 00:22:14 +0000 (0:00:00.176) 0:00:00.176 ***** 2026-02-28 00:22:27.416171 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:27.416182 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:27.416194 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:27.416205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:27.416216 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:27.416227 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:27.416238 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:27.416249 | orchestrator | 2026-02-28 00:22:27.416260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-28 00:22:27.416273 | orchestrator | Saturday 28 February 2026 00:22:20 +0000 (0:00:05.982) 0:00:06.158 ***** 2026-02-28 
00:22:27.416285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-28 00:22:27.416298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-28 00:22:27.416309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-28 00:22:27.416320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-28 00:22:27.416331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-28 00:22:27.416353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-28 00:22:27.416364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-28 00:22:27.416376 | orchestrator | 2026-02-28 00:22:27.416387 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:27.416398 | orchestrator | Saturday 28 February 2026 00:22:20 +0000 (0:00:00.163) 0:00:06.322 ***** 2026-02-28 00:22:27.416409 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIDT4wdaD/aVwEx8n2Q56Wsq15jWiiIf2ceR/cUlK9rXx) 2026-02-28 00:22:27.416429 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDi06AuPKNRoNInXN+WkhJ12qoL5o+MqVbB1spfG2A4iu6pCf9SNAW4g2q+oWfi5ECReRPuNTwkZa/QWOObDY1q12v3Fe1xfIEdyRkHcMFUrkQRkR1yVHa34VVWyKmqqHL/rBCVfzUe5VCekkadTUEbXQGHAFd7LeTyOo/jPgl0zNFw67oM9JzIZi8mDfHDZz/K4t8Kuiqszo2moq3U2zFL2XvfNkZUP6SI4zKS6pB3s/cGVhkU8bhAy40cXAoUF/Ft0mrgkZTc2E6oGe6uA/6/+EjXS3HK0q0vIOXuuen8/yq1hdY9pqDZ5ebFjiiaoxPlZ8l88iy00+OH0BNCu0O11hER3IPlGn4MnvMTSrm8C4u007XgeZer4FF1wLQ+9md8Cx3si75dbgdfFH/So6LDB4Rre8IMpFjOyOtxi2f6pR6ysXCRLossVIlLIuGTxedFCLGyozGbsxP/bBnLZ4d2Sn0CDPedYxWdpXd/n+vkCsLUMASoSkxa2LR77ZE+hsk=) 2026-02-28 00:22:27.416468 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCb8JEmIbyiZNvwS+v4WcdOlxoYZ16Uq1rdog9Z3fabXh1VFst8xuRTUYTHXJnKXGuxbhizq7xihdwBxNaGHzUA=) 2026-02-28 00:22:27.416482 | orchestrator | 2026-02-28 00:22:27.416493 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:27.416504 | orchestrator | Saturday 28 February 2026 00:22:21 +0000 (0:00:01.215) 0:00:07.538 ***** 2026-02-28 00:22:27.416535 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKzJE6pPmEFFDoqnWf0gKFWFQfW8hOq9yNkv19AOc0Xp7zvf/YkZmKzYrhuwt0/65RpaZJkbpLiCYThh25kkqEg3/9nx10QWR/GvvhT0VVr8XNG/F8spIOTbClFESCnII8Z27yc6DUzYFq1qypKfVtCWVh6jtnYNe45HHAI/fFtqWQiT48FXI2wYRzMybyUsUU1pHoMLfjnxXp1fHXBZiYO7RNb1E814yc+bE/RLI4vFpAVTNPaWrNd1nbtzlvZ2PzTtdNfml91l0W1N6LZkcq0zzlLDng0zbjJ3dyDriqaJG9zb9Ds17znaLZkhH8VlL2eyImuMOc/PYNQhStlmJ2+SkQaMDbPXlMhUa73Rbeyu/R5NW3XC8alXsEGHLdSoqg1oYsmdmrv/WQ2I8oHFTisWNR/5G6WycqH5wh4lcpS2MnIKIOV3hFamOmitT8c/jx7DhQ5QjJBS3/IxGjiln8Tx5edzx+0TvV7atEX8k/knrJH5BGTj4Hh/XLIRRBFfU=) 2026-02-28 00:22:27.416549 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEKXZJP37wpJfkCM0k+HnIALUheAjK+7q2snLnB9hFmD) 2026-02-28 00:22:27.416562 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNr5aPcQBq6lyiNfAaR+oezQm9mUwiZe7u1bkv5VazMg3OCCcG5wtEL9iCVFNPDDIS1pEhUQ9CASOyFsqm8BaU=) 2026-02-28 00:22:27.416575 | orchestrator | 2026-02-28 00:22:27.416587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:27.416599 | orchestrator | Saturday 28 February 2026 00:22:23 +0000 (0:00:01.087) 0:00:08.625 ***** 2026-02-28 00:22:27.416639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYtg483eW6RiZwzV0gXX96XsLCPCjhuL1WuaNW+VXpsY0EjipVjTcoyOQi3j+owHjH5CmudvrDWP1XUGnTvGtrL619notdfWvm3mp0bNbZhw/53rumz+dS5YzI+ZFWAeDm8YgRlpgzFWTdNTIMGLstnvEp4p4Asv0mED5kI2BnvVZRB1GB37B2Vxz5rLW0LuFvNycYArUszFBvy3wxVZCe0AJXw1o7bC1abS10uECzGRlOtKfP8sdXttsW2aQFG5xqJqhujCV6T3U0yREo6fO5PzLGP1uEYw8Fv1F36RWbAzm5mMGnOvxYpRyiOdPaey/lr4nULYfjHQVzbhpLKk3nTQTPzEFJq4vg+vvSqYcMpdXGcDUluYtvk9L9rK6OdRVcc0nihB3yO3vFCN0j640KY5It3zy3pEhK5ZzvQUQ818CbRQNtLtgdnlhE+BZBkQEzn7vmex3K3cG39+fQiXbNTlTdcoMkf+w+VGtUbVefzwe4IaLMV1XojANxToXnZyU=) 2026-02-28 00:22:27.416653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBInVRYaH0XAlIVyNINYAnTIiQQnoprIUoxXB4JhGDQ+IzS//SOzD0s/DBkrJG+jqAYGBiYL8Xc5CDZUdGSnukCA=) 2026-02-28 00:22:27.416666 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKoKA1Di1IUhb9rWEk78LoJ4zieTZ7ioFdF53zIGhyIW) 2026-02-28 00:22:27.416683 | orchestrator | 2026-02-28 00:22:27.416702 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:27.416715 | orchestrator | Saturday 28 February 2026 00:22:24 +0000 (0:00:01.092) 
0:00:09.718 ***** 2026-02-28 00:22:27.416728 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp1nuk6FX7Xcur8zyQ0nuGv/6nywCFO9/4Mz/Ln22XvrwG32ycW8DP7KRnu5i6MFsS74HV4vAYPxjVkwrROCE/63aFTL0igir1x6p5oElqVpyU/umFIWKDjzVgM2m2OBcB1G/qJ89xqhsaGD38kqyeDQXCH4+sbpckKVTQ6GZStRVDpPDohD/Gty4Bs7ywhN5wz5lDMV5KYhL3+TvoKPdun40tURBIoQ0+MkIvUUbw4/g056LJzje/nnqDfkIChfcdgJ+Xqk8VmGM+SdNarnGbagvtSDKKFVFKM1DdtxsWaFGESVYnYEW6rnoDSCkg+7tf3U2D3JczLN2J9YSxdWmlUiA80cTVG/ULVCGFM7BgMaaBwOLfEbQq6DaGnvQ3Ru1iBpf97cDCeRdZiPJCN2CI664CupkdA63BAeUyykcfFdxHrscfYtw8gNYrY3o1JoS/uX8Pn/Gl77Wg7pJB3H9F6TBKGBwIZtUPvzpw6xM66CKWSY8s9f51neLl6B9lkdM=) 2026-02-28 00:22:27.416750 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOqDzgxr1V8Ulr49gXIgcWh28ivwfB0y+yF0kx5SG8OHqxb32kAcvpqUXSkW8cgK6Md6If91Fyody5RYwFP6RU=) 2026-02-28 00:22:27.416763 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB0Ajv2/P6u96ld/mx/B6zPe5fp/IiSJamfMLzFruTj1) 2026-02-28 00:22:27.416775 | orchestrator | 2026-02-28 00:22:27.416794 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:27.416808 | orchestrator | Saturday 28 February 2026 00:22:25 +0000 (0:00:01.075) 0:00:10.793 ***** 2026-02-28 00:22:27.416895 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKyiOCuTZaDZYXlacTQwaEWbd0JXP84zsPTi7xvEjQ2H4ICal248M4o6yso0P4wsLvPPG0jtPe/6ZtZwMmI33BkmeyPEqDunUTfvK2hEZicG96wco0WoImsv5IBy7iJB02e0ZgfbPg0gVywRtD3dhrDDPyackZeWxMw3KUZR/h18aaDrpbHPSQyA2tP6X30TbSBBwJN0ZGKrg15VBP5mVuRvSU6wY9hW7YGBHLKvHkPzRoVLaFea4hiDw/clb78eb/o0la1T0pmceiRzHeLTcJZZitGZBRWA8MBkJKWGliirq0MbQ/aXbiu3QWCU10N31yaRKOiqmIhpfuzLYLTrEnV7SMdqcZLAgW5pigp1jKYSyL5iVWI3TBb3YiEu2Xsn8NiZS1Qxz//JKpwhHw0/KNsF3tRA/nobNIbB9Rwy08K1VyjLdYCCcbDWac7l60isgv/RpO0iIbUZM9nbq4pI/pfFxlKdANnxXVLYGCihhtCzjXLa8izLQK39UZ1ef0UXU=) 2026-02-28 00:22:27.416908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKD2RrAtUXf/7+p1c9ziaK9hK3Ar7Ui4XTQJabRGY0q6pcowVMd2llckeDdpu8VWgqbp0BpHhPOED8PMuK1sEXI=) 2026-02-28 00:22:27.416919 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICekjSX0wdu+jYVsEPrGWEW94NEWExacaNOjCiD/Ewoi) 2026-02-28 00:22:27.416929 | orchestrator | 2026-02-28 00:22:27.416941 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:27.416952 | orchestrator | Saturday 28 February 2026 00:22:26 +0000 (0:00:01.119) 0:00:11.912 ***** 2026-02-28 00:22:27.416973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiVTaRIf6Qe936v8l/WTR3H6bA2Thul7UQYoRw8D+C6Avic/oBTN6dbslg68+Fb0LbiQqEdQj15UorJD0sy/6JSXxjXoeHKiXD78OkQ+6fWCbMv0lVJcon8LiIHhPf80yfIFQzR4SJUFlJVRp6a5Ya3bbN1x7/2ZWIs7vRz0pN1NiwuqMMHR0Pts62JMaqxtz+xZxhN051941+1+5PT6bec4EQbqpnFFoCSQi+JqGj/tF/uIbeW7GoXDbT/wSTaDMdiHN4eJlGBYpye0jLagEwCw/cj54pkJluPTKPoQ4zOFFCj3Z3fLEEXgjotHOQ0aLH451RDzEpgIaB93TPd3mxIw2eS+u048OYxFZSxCSubjUSintWma+8pZrZ84cwoE+Vay94cKuW6ZN2NPr7UGoslz87FGWyzD7CdZZVRZZu0x3qKAIJhdVVsOBlRk2zdjKCQS0odb9yGsATBh7w0g0vezbkgNDtTau/MQSgSeOKfm3em40Sb0xbr/MZjlXhST8=) 2026-02-28 00:22:38.193772 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUM2elyevfL/8Wk5Co7QGT2wO67TSx+7Jac4MbdAY55nGk0y0I6igfOrT9O4A3XXDAqyuVVo53PH+u56HwfK1M=) 2026-02-28 00:22:38.193959 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBFjMp7B/6w8V1uVVJL4g4IKET1LknpyDIN0Qs3ww3UB) 2026-02-28 00:22:38.193994 | orchestrator | 2026-02-28 00:22:38.194008 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.194083 | orchestrator | Saturday 28 February 2026 00:22:27 +0000 (0:00:01.090) 0:00:13.003 ***** 2026-02-28 00:22:38.194132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsvlflrO9ZqfispRVIokokiiK96HDLABWZaB6ISNmZi1EPc4/oSza9BiRGeU//2bb2Zbe0g1H2UpTIAk3TRGhA9PlIYdOPbix4Azd9c6tMJWzGe7QoBTPZ4gM+xs3SVkRqqaFwhgl+IetULoyjDa+9FTcy48GvflUh9ml9Y4/X8gRDPVx9pagTvd1t1rs5Xd7MqB7Q2uIpJWT58xaBUCvPZcQjAYZESog8JtecVAXqEpihFNWKOHiQoeo1eNdEgcVhKrwJcjiN2fA+WhnmmN0l0pFNhUrHSTgj1iTh8MAfV97CLQv76cSc1BPahEzBk7B34VKcgGQWGxo7F8S/IRNxXUaiADPhQJfWHy4GQKTnrRv5MpC/6zEusM+Mhk7aBF2V75k2jy/EFKgzjglVIspOdSTxfadja7Dn+vjAaLJY9hOt/oAtC+5vrDs1PByKzu5MTrO0bhSuyRrD2Y0aXLQI+rG7RTaKBfn9iAt2EqC0kw375YL62SXW7hgz/wafa3E=) 2026-02-28 00:22:38.194149 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDZImFjYQl3E0GAGQ4N42VorFP2b5/4P6jBVcvm5WaD+9N6BBmOgVpIGY79mFMMwPOAeZ+xsfDn2t3kUAZi8kOw=) 2026-02-28 00:22:38.194188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8SHR4gAWoyn+wG56g6KnFclCf8Hkd831YD1ccdaLUT) 2026-02-28 00:22:38.194200 | orchestrator | 2026-02-28 00:22:38.194211 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-28 00:22:38.194223 | orchestrator | Saturday 28 February 2026 00:22:28 +0000 
(0:00:01.059) 0:00:14.062 ***** 2026-02-28 00:22:38.194234 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:38.194246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:38.194256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:38.194267 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:38.194278 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:38.194289 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:38.194300 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:38.194310 | orchestrator | 2026-02-28 00:22:38.194321 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-28 00:22:38.194333 | orchestrator | Saturday 28 February 2026 00:22:33 +0000 (0:00:05.271) 0:00:19.334 ***** 2026-02-28 00:22:38.194345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-28 00:22:38.194358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-28 00:22:38.194369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-28 00:22:38.194379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-28 00:22:38.194390 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-28 00:22:38.194401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-28 00:22:38.194411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-28 00:22:38.194422 | orchestrator | 2026-02-28 00:22:38.194433 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.194444 | orchestrator | Saturday 28 February 2026 00:22:33 +0000 (0:00:00.176) 0:00:19.510 ***** 2026-02-28 00:22:38.194455 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDT4wdaD/aVwEx8n2Q56Wsq15jWiiIf2ceR/cUlK9rXx) 2026-02-28 00:22:38.194495 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDi06AuPKNRoNInXN+WkhJ12qoL5o+MqVbB1spfG2A4iu6pCf9SNAW4g2q+oWfi5ECReRPuNTwkZa/QWOObDY1q12v3Fe1xfIEdyRkHcMFUrkQRkR1yVHa34VVWyKmqqHL/rBCVfzUe5VCekkadTUEbXQGHAFd7LeTyOo/jPgl0zNFw67oM9JzIZi8mDfHDZz/K4t8Kuiqszo2moq3U2zFL2XvfNkZUP6SI4zKS6pB3s/cGVhkU8bhAy40cXAoUF/Ft0mrgkZTc2E6oGe6uA/6/+EjXS3HK0q0vIOXuuen8/yq1hdY9pqDZ5ebFjiiaoxPlZ8l88iy00+OH0BNCu0O11hER3IPlGn4MnvMTSrm8C4u007XgeZer4FF1wLQ+9md8Cx3si75dbgdfFH/So6LDB4Rre8IMpFjOyOtxi2f6pR6ysXCRLossVIlLIuGTxedFCLGyozGbsxP/bBnLZ4d2Sn0CDPedYxWdpXd/n+vkCsLUMASoSkxa2LR77ZE+hsk=) 2026-02-28 00:22:38.194517 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCb8JEmIbyiZNvwS+v4WcdOlxoYZ16Uq1rdog9Z3fabXh1VFst8xuRTUYTHXJnKXGuxbhizq7xihdwBxNaGHzUA=) 2026-02-28 
00:22:38.194539 | orchestrator | 2026-02-28 00:22:38.194551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.194563 | orchestrator | Saturday 28 February 2026 00:22:34 +0000 (0:00:01.071) 0:00:20.581 ***** 2026-02-28 00:22:38.194574 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNr5aPcQBq6lyiNfAaR+oezQm9mUwiZe7u1bkv5VazMg3OCCcG5wtEL9iCVFNPDDIS1pEhUQ9CASOyFsqm8BaU=) 2026-02-28 00:22:38.194586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKzJE6pPmEFFDoqnWf0gKFWFQfW8hOq9yNkv19AOc0Xp7zvf/YkZmKzYrhuwt0/65RpaZJkbpLiCYThh25kkqEg3/9nx10QWR/GvvhT0VVr8XNG/F8spIOTbClFESCnII8Z27yc6DUzYFq1qypKfVtCWVh6jtnYNe45HHAI/fFtqWQiT48FXI2wYRzMybyUsUU1pHoMLfjnxXp1fHXBZiYO7RNb1E814yc+bE/RLI4vFpAVTNPaWrNd1nbtzlvZ2PzTtdNfml91l0W1N6LZkcq0zzlLDng0zbjJ3dyDriqaJG9zb9Ds17znaLZkhH8VlL2eyImuMOc/PYNQhStlmJ2+SkQaMDbPXlMhUa73Rbeyu/R5NW3XC8alXsEGHLdSoqg1oYsmdmrv/WQ2I8oHFTisWNR/5G6WycqH5wh4lcpS2MnIKIOV3hFamOmitT8c/jx7DhQ5QjJBS3/IxGjiln8Tx5edzx+0TvV7atEX8k/knrJH5BGTj4Hh/XLIRRBFfU=) 2026-02-28 00:22:38.194597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEKXZJP37wpJfkCM0k+HnIALUheAjK+7q2snLnB9hFmD) 2026-02-28 00:22:38.194608 | orchestrator | 2026-02-28 00:22:38.194619 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.194653 | orchestrator | Saturday 28 February 2026 00:22:36 +0000 (0:00:01.081) 0:00:21.662 ***** 2026-02-28 00:22:38.194664 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBInVRYaH0XAlIVyNINYAnTIiQQnoprIUoxXB4JhGDQ+IzS//SOzD0s/DBkrJG+jqAYGBiYL8Xc5CDZUdGSnukCA=) 2026-02-28 00:22:38.194676 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYtg483eW6RiZwzV0gXX96XsLCPCjhuL1WuaNW+VXpsY0EjipVjTcoyOQi3j+owHjH5CmudvrDWP1XUGnTvGtrL619notdfWvm3mp0bNbZhw/53rumz+dS5YzI+ZFWAeDm8YgRlpgzFWTdNTIMGLstnvEp4p4Asv0mED5kI2BnvVZRB1GB37B2Vxz5rLW0LuFvNycYArUszFBvy3wxVZCe0AJXw1o7bC1abS10uECzGRlOtKfP8sdXttsW2aQFG5xqJqhujCV6T3U0yREo6fO5PzLGP1uEYw8Fv1F36RWbAzm5mMGnOvxYpRyiOdPaey/lr4nULYfjHQVzbhpLKk3nTQTPzEFJq4vg+vvSqYcMpdXGcDUluYtvk9L9rK6OdRVcc0nihB3yO3vFCN0j640KY5It3zy3pEhK5ZzvQUQ818CbRQNtLtgdnlhE+BZBkQEzn7vmex3K3cG39+fQiXbNTlTdcoMkf+w+VGtUbVefzwe4IaLMV1XojANxToXnZyU=) 2026-02-28 00:22:38.194687 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKoKA1Di1IUhb9rWEk78LoJ4zieTZ7ioFdF53zIGhyIW) 2026-02-28 00:22:38.194698 | orchestrator | 2026-02-28 00:22:38.194709 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.194745 | orchestrator | Saturday 28 February 2026 00:22:37 +0000 (0:00:01.065) 0:00:22.728 ***** 2026-02-28 00:22:38.194757 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOqDzgxr1V8Ulr49gXIgcWh28ivwfB0y+yF0kx5SG8OHqxb32kAcvpqUXSkW8cgK6Md6If91Fyody5RYwFP6RU=) 2026-02-28 00:22:38.194768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp1nuk6FX7Xcur8zyQ0nuGv/6nywCFO9/4Mz/Ln22XvrwG32ycW8DP7KRnu5i6MFsS74HV4vAYPxjVkwrROCE/63aFTL0igir1x6p5oElqVpyU/umFIWKDjzVgM2m2OBcB1G/qJ89xqhsaGD38kqyeDQXCH4+sbpckKVTQ6GZStRVDpPDohD/Gty4Bs7ywhN5wz5lDMV5KYhL3+TvoKPdun40tURBIoQ0+MkIvUUbw4/g056LJzje/nnqDfkIChfcdgJ+Xqk8VmGM+SdNarnGbagvtSDKKFVFKM1DdtxsWaFGESVYnYEW6rnoDSCkg+7tf3U2D3JczLN2J9YSxdWmlUiA80cTVG/ULVCGFM7BgMaaBwOLfEbQq6DaGnvQ3Ru1iBpf97cDCeRdZiPJCN2CI664CupkdA63BAeUyykcfFdxHrscfYtw8gNYrY3o1JoS/uX8Pn/Gl77Wg7pJB3H9F6TBKGBwIZtUPvzpw6xM66CKWSY8s9f51neLl6B9lkdM=) 2026-02-28 00:22:38.194791 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB0Ajv2/P6u96ld/mx/B6zPe5fp/IiSJamfMLzFruTj1) 2026-02-28 00:22:42.594339 | orchestrator | 2026-02-28 00:22:42.594434 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:42.594450 | orchestrator | Saturday 28 February 2026 00:22:38 +0000 (0:00:01.053) 0:00:23.781 ***** 2026-02-28 00:22:42.594463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKyiOCuTZaDZYXlacTQwaEWbd0JXP84zsPTi7xvEjQ2H4ICal248M4o6yso0P4wsLvPPG0jtPe/6ZtZwMmI33BkmeyPEqDunUTfvK2hEZicG96wco0WoImsv5IBy7iJB02e0ZgfbPg0gVywRtD3dhrDDPyackZeWxMw3KUZR/h18aaDrpbHPSQyA2tP6X30TbSBBwJN0ZGKrg15VBP5mVuRvSU6wY9hW7YGBHLKvHkPzRoVLaFea4hiDw/clb78eb/o0la1T0pmceiRzHeLTcJZZitGZBRWA8MBkJKWGliirq0MbQ/aXbiu3QWCU10N31yaRKOiqmIhpfuzLYLTrEnV7SMdqcZLAgW5pigp1jKYSyL5iVWI3TBb3YiEu2Xsn8NiZS1Qxz//JKpwhHw0/KNsF3tRA/nobNIbB9Rwy08K1VyjLdYCCcbDWac7l60isgv/RpO0iIbUZM9nbq4pI/pfFxlKdANnxXVLYGCihhtCzjXLa8izLQK39UZ1ef0UXU=) 2026-02-28 00:22:42.594478 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKD2RrAtUXf/7+p1c9ziaK9hK3Ar7Ui4XTQJabRGY0q6pcowVMd2llckeDdpu8VWgqbp0BpHhPOED8PMuK1sEXI=) 2026-02-28 00:22:42.594491 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICekjSX0wdu+jYVsEPrGWEW94NEWExacaNOjCiD/Ewoi) 2026-02-28 00:22:42.594502 | orchestrator | 2026-02-28 00:22:42.594512 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:42.594521 | orchestrator | Saturday 28 February 2026 00:22:39 +0000 (0:00:01.030) 0:00:24.812 ***** 2026-02-28 00:22:42.594531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUM2elyevfL/8Wk5Co7QGT2wO67TSx+7Jac4MbdAY55nGk0y0I6igfOrT9O4A3XXDAqyuVVo53PH+u56HwfK1M=) 2026-02-28 00:22:42.594541 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBFjMp7B/6w8V1uVVJL4g4IKET1LknpyDIN0Qs3ww3UB) 2026-02-28 00:22:42.594551 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiVTaRIf6Qe936v8l/WTR3H6bA2Thul7UQYoRw8D+C6Avic/oBTN6dbslg68+Fb0LbiQqEdQj15UorJD0sy/6JSXxjXoeHKiXD78OkQ+6fWCbMv0lVJcon8LiIHhPf80yfIFQzR4SJUFlJVRp6a5Ya3bbN1x7/2ZWIs7vRz0pN1NiwuqMMHR0Pts62JMaqxtz+xZxhN051941+1+5PT6bec4EQbqpnFFoCSQi+JqGj/tF/uIbeW7GoXDbT/wSTaDMdiHN4eJlGBYpye0jLagEwCw/cj54pkJluPTKPoQ4zOFFCj3Z3fLEEXgjotHOQ0aLH451RDzEpgIaB93TPd3mxIw2eS+u048OYxFZSxCSubjUSintWma+8pZrZ84cwoE+Vay94cKuW6ZN2NPr7UGoslz87FGWyzD7CdZZVRZZu0x3qKAIJhdVVsOBlRk2zdjKCQS0odb9yGsATBh7w0g0vezbkgNDtTau/MQSgSeOKfm3em40Sb0xbr/MZjlXhST8=) 2026-02-28 00:22:42.594562 | orchestrator | 2026-02-28 00:22:42.594571 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:42.594581 | orchestrator | Saturday 28 February 2026 00:22:40 +0000 (0:00:01.088) 0:00:25.900 ***** 2026-02-28 00:22:42.594592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsvlflrO9ZqfispRVIokokiiK96HDLABWZaB6ISNmZi1EPc4/oSza9BiRGeU//2bb2Zbe0g1H2UpTIAk3TRGhA9PlIYdOPbix4Azd9c6tMJWzGe7QoBTPZ4gM+xs3SVkRqqaFwhgl+IetULoyjDa+9FTcy48GvflUh9ml9Y4/X8gRDPVx9pagTvd1t1rs5Xd7MqB7Q2uIpJWT58xaBUCvPZcQjAYZESog8JtecVAXqEpihFNWKOHiQoeo1eNdEgcVhKrwJcjiN2fA+WhnmmN0l0pFNhUrHSTgj1iTh8MAfV97CLQv76cSc1BPahEzBk7B34VKcgGQWGxo7F8S/IRNxXUaiADPhQJfWHy4GQKTnrRv5MpC/6zEusM+Mhk7aBF2V75k2jy/EFKgzjglVIspOdSTxfadja7Dn+vjAaLJY9hOt/oAtC+5vrDs1PByKzu5MTrO0bhSuyRrD2Y0aXLQI+rG7RTaKBfn9iAt2EqC0kw375YL62SXW7hgz/wafa3E=) 2026-02-28 00:22:42.594679 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDZImFjYQl3E0GAGQ4N42VorFP2b5/4P6jBVcvm5WaD+9N6BBmOgVpIGY79mFMMwPOAeZ+xsfDn2t3kUAZi8kOw=) 2026-02-28 00:22:42.594693 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8SHR4gAWoyn+wG56g6KnFclCf8Hkd831YD1ccdaLUT) 2026-02-28 00:22:42.594702 | orchestrator | 2026-02-28 00:22:42.594712 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-28 00:22:42.594742 | orchestrator | Saturday 28 February 2026 00:22:41 +0000 (0:00:01.093) 0:00:26.994 ***** 2026-02-28 00:22:42.594753 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-28 00:22:42.594763 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-28 00:22:42.594772 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-28 00:22:42.594782 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-28 00:22:42.594791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 00:22:42.594800 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-28 00:22:42.594810 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-28 00:22:42.594819 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:42.594829 | orchestrator | 2026-02-28 00:22:42.594857 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-28 00:22:42.594867 | orchestrator | Saturday 28 February 2026 00:22:41 +0000 (0:00:00.158) 0:00:27.152 ***** 2026-02-28 00:22:42.594879 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:42.594890 | orchestrator | 2026-02-28 00:22:42.594901 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-28 00:22:42.594912 | orchestrator | Saturday 28 February 2026 00:22:41 +0000 
(0:00:00.064) 0:00:27.217 ***** 2026-02-28 00:22:42.594929 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:42.594940 | orchestrator | 2026-02-28 00:22:42.594951 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-28 00:22:42.594962 | orchestrator | Saturday 28 February 2026 00:22:41 +0000 (0:00:00.051) 0:00:27.268 ***** 2026-02-28 00:22:42.594973 | orchestrator | changed: [testbed-manager] 2026-02-28 00:22:42.594984 | orchestrator | 2026-02-28 00:22:42.594995 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:22:42.595007 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:22:42.595019 | orchestrator | 2026-02-28 00:22:42.595029 | orchestrator | 2026-02-28 00:22:42.595040 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:22:42.595051 | orchestrator | Saturday 28 February 2026 00:22:42 +0000 (0:00:00.713) 0:00:27.982 ***** 2026-02-28 00:22:42.595062 | orchestrator | =============================================================================== 2026-02-28 00:22:42.595073 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.98s 2026-02-28 00:22:42.595084 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.27s 2026-02-28 00:22:42.595095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-02-28 00:22:42.595106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-28 00:22:42.595117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:42.595128 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:42.595139 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:42.595150 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:42.595161 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-28 00:22:42.595171 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-28 00:22:42.595183 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-28 00:22:42.595194 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-28 00:22:42.595205 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-28 00:22:42.595215 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-02-28 00:22:42.595232 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-02-28 00:22:42.595243 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:42.595253 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-02-28 00:22:42.595262 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-02-28 00:22:42.595273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-28 00:22:42.595283 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-02-28 00:22:42.887756 | orchestrator | + osism apply squid 2026-02-28 00:22:54.891033 | orchestrator | 2026-02-28 00:22:54 | INFO  | Task 444de56e-01e0-44cf-b6f1-603714136295 (squid) was prepared for execution. 
2026-02-28 00:22:54.891146 | orchestrator | 2026-02-28 00:22:54 | INFO  | It takes a moment until task 444de56e-01e0-44cf-b6f1-603714136295 (squid) has been started and output is visible here. 2026-02-28 00:24:49.176585 | orchestrator | 2026-02-28 00:24:49.176719 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-28 00:24:49.176749 | orchestrator | 2026-02-28 00:24:49.176770 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-28 00:24:49.176846 | orchestrator | Saturday 28 February 2026 00:22:59 +0000 (0:00:00.163) 0:00:00.163 ***** 2026-02-28 00:24:49.176865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:24:49.176886 | orchestrator | 2026-02-28 00:24:49.176905 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-28 00:24:49.176924 | orchestrator | Saturday 28 February 2026 00:22:59 +0000 (0:00:00.091) 0:00:00.254 ***** 2026-02-28 00:24:49.176944 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:49.176963 | orchestrator | 2026-02-28 00:24:49.176982 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-28 00:24:49.177000 | orchestrator | Saturday 28 February 2026 00:23:00 +0000 (0:00:01.509) 0:00:01.763 ***** 2026-02-28 00:24:49.177018 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-28 00:24:49.177037 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-28 00:24:49.177056 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-28 00:24:49.177074 | orchestrator | 2026-02-28 00:24:49.177092 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-28 00:24:49.177111 | orchestrator | Saturday 
28 February 2026 00:23:01 +0000 (0:00:01.146) 0:00:02.909 ***** 2026-02-28 00:24:49.177131 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-28 00:24:49.177150 | orchestrator | 2026-02-28 00:24:49.177170 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-28 00:24:49.177190 | orchestrator | Saturday 28 February 2026 00:23:02 +0000 (0:00:01.119) 0:00:04.029 ***** 2026-02-28 00:24:49.177208 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:49.177227 | orchestrator | 2026-02-28 00:24:49.177246 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-28 00:24:49.177266 | orchestrator | Saturday 28 February 2026 00:23:03 +0000 (0:00:00.344) 0:00:04.374 ***** 2026-02-28 00:24:49.177286 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:49.177306 | orchestrator | 2026-02-28 00:24:49.177327 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-28 00:24:49.177346 | orchestrator | Saturday 28 February 2026 00:23:04 +0000 (0:00:00.902) 0:00:05.276 ***** 2026-02-28 00:24:49.177366 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-28 00:24:49.177392 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:49.177412 | orchestrator | 2026-02-28 00:24:49.177431 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-28 00:24:49.177486 | orchestrator | Saturday 28 February 2026 00:23:35 +0000 (0:00:31.669) 0:00:36.946 ***** 2026-02-28 00:24:49.177507 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:49.177524 | orchestrator | 2026-02-28 00:24:49.177544 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-28 00:24:49.177562 | orchestrator | Saturday 28 February 2026 00:23:47 +0000 (0:00:12.123) 0:00:49.069 ***** 2026-02-28 00:24:49.177581 | orchestrator | Pausing for 60 seconds 2026-02-28 00:24:49.177599 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:49.177617 | orchestrator | 2026-02-28 00:24:49.177636 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-28 00:24:49.177655 | orchestrator | Saturday 28 February 2026 00:24:48 +0000 (0:01:00.106) 0:01:49.175 ***** 2026-02-28 00:24:49.177674 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:49.177692 | orchestrator | 2026-02-28 00:24:49.177709 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-28 00:24:49.177727 | orchestrator | Saturday 28 February 2026 00:24:48 +0000 (0:00:00.074) 0:01:49.250 ***** 2026-02-28 00:24:49.177745 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:49.177762 | orchestrator | 2026-02-28 00:24:49.177805 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:24:49.177822 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:24:49.177842 | orchestrator | 2026-02-28 00:24:49.177860 | orchestrator | 2026-02-28 00:24:49.177879 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-28 00:24:49.177897 | orchestrator | Saturday 28 February 2026 00:24:48 +0000 (0:00:00.715) 0:01:49.966 ***** 2026-02-28 00:24:49.177916 | orchestrator | =============================================================================== 2026-02-28 00:24:49.177935 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2026-02-28 00:24:49.177952 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.67s 2026-02-28 00:24:49.177971 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.12s 2026-02-28 00:24:49.178093 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.51s 2026-02-28 00:24:49.178123 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2026-02-28 00:24:49.178141 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-02-28 00:24:49.178160 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2026-02-28 00:24:49.178179 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.72s 2026-02-28 00:24:49.178201 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-02-28 00:24:49.178222 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-02-28 00:24:49.178244 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-02-28 00:24:49.491343 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-28 00:24:49.491456 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-28 00:24:49.556547 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:24:49.556646 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release
2026-02-28 00:24:49.563904 | orchestrator | + set -e
2026-02-28 00:24:49.564256 | orchestrator | + NAMESPACE=kolla/release
2026-02-28 00:24:49.564354 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-28 00:24:49.571745 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-28 00:24:49.646104 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-28 00:24:49.647234 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-28 00:25:01.703961 | orchestrator | 2026-02-28 00:25:01 | INFO  | Task 25dd1661-890d-4b20-b155-fdeeb55ddfa3 (operator) was prepared for execution.
2026-02-28 00:25:01.704069 | orchestrator | 2026-02-28 00:25:01 | INFO  | It takes a moment until task 25dd1661-890d-4b20-b155-fdeeb55ddfa3 (operator) has been started and output is visible here.
2026-02-28 00:25:17.566354 | orchestrator |
2026-02-28 00:25:17.566466 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-28 00:25:17.566482 | orchestrator |
2026-02-28 00:25:17.566494 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-28 00:25:17.566506 | orchestrator | Saturday 28 February 2026 00:25:05 +0000 (0:00:00.140) 0:00:00.140 *****
2026-02-28 00:25:17.566517 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:17.566529 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:17.566540 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:17.566551 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:17.566562 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:17.566573 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:17.566584 | orchestrator |
2026-02-28 00:25:17.566594 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-28 00:25:17.566606 | orchestrator | Saturday 28 February 2026 00:25:09 +0000 (0:00:03.351) 0:00:03.492 *****
2026-02-28 00:25:17.566616 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:17.566627 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:17.566638 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:17.566665 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:17.566676 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:17.566687 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:17.566698 | orchestrator |
2026-02-28 00:25:17.566709 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-28 00:25:17.566720 | orchestrator |
2026-02-28 00:25:17.566731 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-28 00:25:17.566743 | orchestrator | Saturday 28 February 2026 00:25:09 +0000 (0:00:00.771) 0:00:04.264 *****
2026-02-28 00:25:17.566753 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:17.566764 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:17.566775 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:17.566786 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:17.566836 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:17.566848 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:17.566860 | orchestrator |
2026-02-28 00:25:17.566871 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-28 00:25:17.566882 | orchestrator | Saturday 28 February 2026 00:25:10 +0000 (0:00:00.155) 0:00:04.419 *****
2026-02-28 00:25:17.566896 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:25:17.566908 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:25:17.566920 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:25:17.566932 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:25:17.566944 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:25:17.566956 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:25:17.566969 | orchestrator |
2026-02-28 00:25:17.566981 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-28 00:25:17.566994 | orchestrator | Saturday 28 February 2026 00:25:10 +0000 (0:00:00.194) 0:00:04.613 *****
2026-02-28 00:25:17.567007 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:17.567020 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:17.567033 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:17.567045 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:17.567058 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:17.567070 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:17.567083 | orchestrator |
2026-02-28 00:25:17.567095 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-28 00:25:17.567108 | orchestrator | Saturday 28 February 2026 00:25:10 +0000 (0:00:00.595) 0:00:05.209 *****
2026-02-28 00:25:17.567120 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:17.567132 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:17.567144 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:17.567157 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:17.567169 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:17.567181 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:17.567216 | orchestrator |
2026-02-28 00:25:17.567229 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-28 00:25:17.567243 | orchestrator | Saturday 28 February 2026 00:25:11 +0000 (0:00:00.815) 0:00:06.024 *****
2026-02-28 00:25:17.567254 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-28 00:25:17.567265 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-28 00:25:17.567276 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-28 00:25:17.567287 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-28 00:25:17.567298 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-28 00:25:17.567308 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-28 00:25:17.567319 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-28 00:25:17.567330 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-28 00:25:17.567341 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-28 00:25:17.567352 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-28 00:25:17.567362 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-28 00:25:17.567373 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-28 00:25:17.567384 | orchestrator |
2026-02-28 00:25:17.567395 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-28 00:25:17.567406 | orchestrator | Saturday 28 February 2026 00:25:12 +0000 (0:00:01.230) 0:00:07.255 *****
2026-02-28 00:25:17.567417 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:17.567428 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:17.567439 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:17.567450 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:17.567461 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:17.567471 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:17.567483 | orchestrator |
2026-02-28 00:25:17.567494 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-28 00:25:17.567506 | orchestrator | Saturday 28 February 2026 00:25:14 +0000 (0:00:01.260) 0:00:08.516 *****
2026-02-28 00:25:17.567517 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-28 00:25:17.567528 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-28 00:25:17.567539 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-28 00:25:17.567550 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:17.567579 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:17.567591 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:17.567602 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:17.567612 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:17.567623 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:17.567634 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:17.567645 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:17.567656 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:17.567666 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:17.567677 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:17.567688 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:17.567699 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:17.567710 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:17.567721 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:17.567732 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:17.567743 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:17.567762 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:17.567773 | orchestrator |
2026-02-28 00:25:17.567784 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-28 00:25:17.567817 | orchestrator | Saturday 28 February 2026 00:25:15 +0000 (0:00:01.189) 0:00:09.706 *****
2026-02-28 00:25:17.567829 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:17.567840 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:17.567851 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:17.567862 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:17.567873 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:17.567884 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:17.567894 | orchestrator |
2026-02-28 00:25:17.567906 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-28 00:25:17.567917 | orchestrator | Saturday 28 February 2026 00:25:15 +0000 (0:00:00.152) 0:00:09.859 *****
2026-02-28 00:25:17.567927 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:17.567938 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:17.567949 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:17.567960 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:17.567971 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:17.567982 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:17.567993 | orchestrator |
2026-02-28 00:25:17.568004 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-28 00:25:17.568015 | orchestrator | Saturday 28 February 2026 00:25:15 +0000 (0:00:00.192) 0:00:10.051 *****
2026-02-28 00:25:17.568026 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:17.568037 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:17.568047 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:17.568058 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:17.568069 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:17.568080 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:17.568091 | orchestrator |
2026-02-28 00:25:17.568101 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-28 00:25:17.568112 | orchestrator | Saturday 28 February 2026 00:25:16 +0000 (0:00:00.613) 0:00:10.665 *****
2026-02-28 00:25:17.568123 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:17.568134 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:17.568145 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:17.568156 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:17.568167 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:17.568177 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:17.568188 | orchestrator |
2026-02-28 00:25:17.568199 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-28 00:25:17.568210 | orchestrator | Saturday 28 February 2026 00:25:16 +0000 (0:00:00.175) 0:00:10.841 *****
2026-02-28 00:25:17.568221 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:25:17.568241 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:17.568252 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:25:17.568264 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:17.568275 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-28 00:25:17.568285 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-28 00:25:17.568296 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:17.568307 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:17.568318 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:25:17.568329 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:17.568340 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:25:17.568351 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:17.568362 | orchestrator |
2026-02-28 00:25:17.568373 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-28 00:25:17.568384 | orchestrator | Saturday 28 February 2026 00:25:17 +0000 (0:00:00.704) 0:00:11.546 *****
2026-02-28 00:25:17.568402 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:17.568413 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:17.568424 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:17.568435 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:17.568445 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:17.568456 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:17.568467 | orchestrator |
2026-02-28 00:25:17.568478 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-28 00:25:17.568489 | orchestrator | Saturday 28 February 2026 00:25:17 +0000 (0:00:00.189) 0:00:11.735 *****
2026-02-28 00:25:17.568500 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:17.568511 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:17.568522 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:17.568533 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:17.568551 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:18.985611 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:18.985713 | orchestrator |
2026-02-28 00:25:18.985729 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-28 00:25:18.985743 | orchestrator | Saturday 28 February 2026 00:25:17 +0000 (0:00:00.159) 0:00:11.895 *****
2026-02-28 00:25:18.985754 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:18.985766 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:18.985777 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:18.985788 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:18.985839 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:18.985851 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:18.985862 | orchestrator |
2026-02-28 00:25:18.985873 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-28 00:25:18.985884 | orchestrator | Saturday 28 February 2026 00:25:17 +0000 (0:00:00.173) 0:00:12.068 *****
2026-02-28 00:25:18.985895 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:18.985906 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:18.985938 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:18.985950 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:18.985960 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:18.985971 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:18.985982 | orchestrator |
2026-02-28 00:25:18.985993 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-28 00:25:18.986004 | orchestrator | Saturday 28 February 2026 00:25:18 +0000 (0:00:00.632) 0:00:12.701 *****
2026-02-28 00:25:18.986015 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:18.986086 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:18.986098 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:18.986109 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:18.986120 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:18.986131 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:18.986142 | orchestrator |
2026-02-28 00:25:18.986153 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:25:18.986166 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:18.986179 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:18.986190 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:18.986201 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:18.986212 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:18.986253 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:18.986264 | orchestrator |
2026-02-28 00:25:18.986275 | orchestrator |
2026-02-28 00:25:18.986285 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:25:18.986296 | orchestrator | Saturday 28 February 2026 00:25:18 +0000 (0:00:00.307) 0:00:13.009 *****
2026-02-28 00:25:18.986307 | orchestrator | ===============================================================================
2026-02-28 00:25:18.986318 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2026-02-28 00:25:18.986329 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2026-02-28 00:25:18.986340 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.23s
2026-02-28 00:25:18.986350 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.19s
2026-02-28 00:25:18.986362 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2026-02-28 00:25:18.986373 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2026-02-28 00:25:18.986383 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2026-02-28 00:25:18.986394 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2026-02-28 00:25:18.986405 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-02-28 00:25:18.986415 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2026-02-28 00:25:18.986426 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.31s
2026-02-28 00:25:18.986437 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-02-28 00:25:18.986447 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-02-28 00:25:18.986458 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2026-02-28 00:25:18.986469 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-02-28 00:25:18.986479 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-02-28 00:25:18.986490 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-02-28 00:25:18.986501 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-02-28 00:25:18.986512 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-02-28 00:25:19.331215 | orchestrator | + osism apply --environment custom facts
2026-02-28 00:25:21.216702 | orchestrator | 2026-02-28 00:25:21 | INFO  | Trying to run play facts in environment custom
2026-02-28 00:25:31.318478 | orchestrator | 2026-02-28 00:25:31 | INFO  | Task a3e44453-88e7-4aaa-9fd0-752d4789598c (facts) was prepared for execution.
2026-02-28 00:25:31.318582 | orchestrator | 2026-02-28 00:25:31 | INFO  | It takes a moment until task a3e44453-88e7-4aaa-9fd0-752d4789598c (facts) has been started and output is visible here.
2026-02-28 00:26:15.188746 | orchestrator |
2026-02-28 00:26:15.188879 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-28 00:26:15.188892 | orchestrator |
2026-02-28 00:26:15.188901 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-28 00:26:15.188910 | orchestrator | Saturday 28 February 2026 00:25:35 +0000 (0:00:00.101) 0:00:00.101 *****
2026-02-28 00:26:15.188919 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:15.188929 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:26:15.188938 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:15.188947 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:15.188955 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:26:15.188964 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:26:15.188993 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:15.189002 | orchestrator |
2026-02-28 00:26:15.189011 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-28 00:26:15.189019 | orchestrator | Saturday 28 February 2026 00:25:37 +0000 (0:00:01.390) 0:00:01.492 *****
2026-02-28 00:26:15.189028 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:15.189036 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:26:15.189045 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:26:15.189053 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:15.189062 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:26:15.189070 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:15.189079 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:15.189087 | orchestrator |
2026-02-28 00:26:15.189096 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-28 00:26:15.189104 | orchestrator |
2026-02-28 00:26:15.189113 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-28 00:26:15.189121 | orchestrator | Saturday 28 February 2026 00:25:38 +0000 (0:00:01.230) 0:00:02.722 *****
2026-02-28 00:26:15.189130 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.189139 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.189147 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.189155 | orchestrator |
2026-02-28 00:26:15.189164 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-28 00:26:15.189173 | orchestrator | Saturday 28 February 2026 00:25:38 +0000 (0:00:00.114) 0:00:02.836 *****
2026-02-28 00:26:15.189181 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.189190 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.189197 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.189206 | orchestrator |
2026-02-28 00:26:15.189214 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-28 00:26:15.189222 | orchestrator | Saturday 28 February 2026 00:25:38 +0000 (0:00:00.223) 0:00:03.059 *****
2026-02-28 00:26:15.189230 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.189238 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.189246 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.189254 | orchestrator |
2026-02-28 00:26:15.189262 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-28 00:26:15.189271 | orchestrator | Saturday 28 February 2026 00:25:38 +0000 (0:00:00.239) 0:00:03.299 *****
2026-02-28 00:26:15.189280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:26:15.189289 | orchestrator |
2026-02-28 00:26:15.189297 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-28 00:26:15.189305 | orchestrator | Saturday 28 February 2026 00:25:38 +0000 (0:00:00.146) 0:00:03.445 *****
2026-02-28 00:26:15.189313 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.189321 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.189328 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.189336 | orchestrator |
2026-02-28 00:26:15.189344 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-28 00:26:15.189352 | orchestrator | Saturday 28 February 2026 00:25:39 +0000 (0:00:00.429) 0:00:03.874 *****
2026-02-28 00:26:15.189359 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:26:15.189367 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:26:15.189375 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:26:15.189382 | orchestrator |
2026-02-28 00:26:15.189390 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-28 00:26:15.189398 | orchestrator | Saturday 28 February 2026 00:25:39 +0000 (0:00:00.151) 0:00:04.026 *****
2026-02-28 00:26:15.189406 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:15.189414 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:15.189422 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:15.189429 | orchestrator |
2026-02-28 00:26:15.189437 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-28 00:26:15.189451 | orchestrator | Saturday 28 February 2026 00:25:40 +0000 (0:00:01.050) 0:00:05.076 *****
2026-02-28 00:26:15.189473 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.189480 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.189487 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.189494 | orchestrator |
2026-02-28 00:26:15.189501 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-28 00:26:15.189508 | orchestrator | Saturday 28 February 2026 00:25:41 +0000 (0:00:00.458) 0:00:05.535 *****
2026-02-28 00:26:15.189515 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:15.189522 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:15.189529 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:15.189536 | orchestrator |
2026-02-28 00:26:15.189543 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-28 00:26:15.189594 | orchestrator | Saturday 28 February 2026 00:25:42 +0000 (0:00:01.026) 0:00:06.561 *****
2026-02-28 00:26:15.189601 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:15.189608 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:15.189615 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:15.189622 | orchestrator |
2026-02-28 00:26:15.189629 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-28 00:26:15.189637 | orchestrator | Saturday 28 February 2026 00:25:57 +0000 (0:00:15.360) 0:00:21.922 *****
2026-02-28 00:26:15.189644 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:26:15.189651 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:26:15.189658 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:26:15.189665 | orchestrator |
2026-02-28 00:26:15.189672 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-28 00:26:15.189693 | orchestrator | Saturday 28 February 2026 00:25:57 +0000 (0:00:00.097) 0:00:22.019 *****
2026-02-28 00:26:15.189700 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:15.189707 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:15.189714 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:15.189721 | orchestrator |
2026-02-28 00:26:15.189729 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-28 00:26:15.189739 | orchestrator | Saturday 28 February 2026 00:26:05 +0000 (0:00:07.839) 0:00:29.859 *****
2026-02-28 00:26:15.189746 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.189753 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.189770 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.189777 | orchestrator |
2026-02-28 00:26:15.189784 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-28 00:26:15.189801 | orchestrator | Saturday 28 February 2026 00:26:05 +0000 (0:00:00.465) 0:00:30.324 *****
2026-02-28 00:26:15.189808 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-28 00:26:15.189816 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-28 00:26:15.189823 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-28 00:26:15.189831 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:15.189892 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:15.189899 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:15.189906 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:15.189913 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:15.189931 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:15.189939 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:15.189947 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:15.189955 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:15.189971 | orchestrator |
2026-02-28 00:26:15.189978 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-28 00:26:15.190001 | orchestrator | Saturday 28 February 2026 00:26:09 +0000 (0:00:03.477) 0:00:33.802 *****
2026-02-28 00:26:15.190089 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.190099 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.190160 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.190168 | orchestrator |
2026-02-28 00:26:15.190175 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:26:15.190190 | orchestrator |
2026-02-28 00:26:15.190198 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:26:15.190205 | orchestrator | Saturday 28 February 2026 00:26:10 +0000 (0:00:01.313) 0:00:35.116 *****
2026-02-28 00:26:15.190225 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:15.190232 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:15.190240 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:15.190256 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:15.190263 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:15.190270 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:15.190287 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:15.190303 | orchestrator |
2026-02-28 00:26:15.190310 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:26:15.190318 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:15.190326 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:15.190342 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:15.190350 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:15.190357 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:15.190365 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:15.190398 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:15.190405 | orchestrator |
2026-02-28 00:26:15.190412 | orchestrator |
2026-02-28 00:26:15.190419 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:26:15.190436 | orchestrator | Saturday 28 February 2026 00:26:15 +0000 (0:00:04.500) 0:00:39.616 *****
2026-02-28 00:26:15.190443 | orchestrator | ===============================================================================
2026-02-28 00:26:15.190458 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.36s
2026-02-28 00:26:15.190466 | orchestrator | Install required packages (Debian) -------------------------------------- 7.84s
2026-02-28 00:26:15.190473 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.50s
2026-02-28 00:26:15.190480 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2026-02-28 00:26:15.190495 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2026-02-28 00:26:15.190503 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2026-02-28 00:26:15.190532 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2026-02-28 00:26:15.435617 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2026-02-28 00:26:15.435715 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2026-02-28 00:26:15.435751 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-02-28 00:26:15.435785 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-02-28 00:26:15.435796 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-02-28 00:26:15.435807 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-02-28 00:26:15.435818 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-02-28 00:26:15.435829 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-02-28 00:26:15.435885 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-02-28 00:26:15.435898 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-02-28 00:26:15.435909 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-02-28 00:26:15.744829 | orchestrator | + osism apply bootstrap
2026-02-28 00:26:27.927763 | orchestrator | 2026-02-28 00:26:27 | INFO  | Task 03df13e5-78f7-456b-9ab4-36441050b67c (bootstrap) was prepared for execution.
2026-02-28 00:26:27.927837 | orchestrator | 2026-02-28 00:26:27 | INFO  | It takes a moment until task 03df13e5-78f7-456b-9ab4-36441050b67c (bootstrap) has been started and output is visible here.
2026-02-28 00:26:43.911498 | orchestrator |
2026-02-28 00:26:43.911594 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-28 00:26:43.911606 | orchestrator |
2026-02-28 00:26:43.911614 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-28 00:26:43.911622 | orchestrator | Saturday 28 February 2026 00:26:32 +0000 (0:00:00.155) 0:00:00.155 *****
2026-02-28 00:26:43.911629 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:43.911637 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:43.911644 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:43.911650 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:43.911657 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:43.911668 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:43.911680 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:43.911690 | orchestrator |
2026-02-28 00:26:43.911697 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:26:43.911704 | orchestrator |
2026-02-28 00:26:43.911711 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:26:43.911718 | orchestrator | Saturday 28 February 2026 00:26:32 +0000 (0:00:00.288) 0:00:00.444 *****
2026-02-28 00:26:43.911724 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:43.911731 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:43.911738 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:43.911744 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:43.911751 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:43.911761 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:43.911773 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:43.911784 | orchestrator |
2026-02-28 00:26:43.911813 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-28 00:26:43.911822 | orchestrator | 2026-02-28 00:26:43.911832 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-28 00:26:43.911853 | orchestrator | Saturday 28 February 2026 00:26:36 +0000 (0:00:03.567) 0:00:04.011 ***** 2026-02-28 00:26:43.911916 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-28 00:26:43.911924 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-28 00:26:43.911931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-28 00:26:43.911937 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-28 00:26:43.911944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:26:43.911951 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-28 00:26:43.911957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:26:43.911964 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 00:26:43.911971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:26:43.911996 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-28 00:26:43.912003 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-28 00:26:43.912010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 00:26:43.912017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 00:26:43.912023 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-28 00:26:43.912030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 00:26:43.912037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-28 00:26:43.912044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-28 00:26:43.912050 | orchestrator | skipping: 
[testbed-manager] 2026-02-28 00:26:43.912058 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:43.912070 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-28 00:26:43.912081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-28 00:26:43.912092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-28 00:26:43.912099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-28 00:26:43.912106 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-28 00:26:43.912112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-28 00:26:43.912119 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-28 00:26:43.912126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-28 00:26:43.912132 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-28 00:26:43.912139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-28 00:26:43.912146 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-28 00:26:43.912153 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-28 00:26:43.912159 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-28 00:26:43.912166 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:43.912173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-28 00:26:43.912180 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 00:26:43.912186 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-28 00:26:43.912193 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-28 00:26:43.912200 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 00:26:43.912208 | orchestrator | skipping: [testbed-node-5] 2026-02-28 
00:26:43.912219 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-28 00:26:43.912226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-28 00:26:43.912233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-28 00:26:43.912240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-28 00:26:43.912251 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-28 00:26:43.912259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:26:43.912265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-28 00:26:43.912287 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-28 00:26:43.912294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:26:43.912301 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-28 00:26:43.912308 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-28 00:26:43.912314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:26:43.912321 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:43.912327 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-28 00:26:43.912334 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:43.912347 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-28 00:26:43.912369 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:43.912376 | orchestrator | 2026-02-28 00:26:43.912383 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-28 00:26:43.912390 | orchestrator | 2026-02-28 00:26:43.912396 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-28 00:26:43.912403 | orchestrator | Saturday 28 February 2026 00:26:36 +0000 (0:00:00.471) 
0:00:04.483 ***** 2026-02-28 00:26:43.912410 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:43.912416 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:43.912423 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:43.912429 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:43.912436 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:43.912442 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:43.912449 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:43.912456 | orchestrator | 2026-02-28 00:26:43.912462 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-28 00:26:43.912469 | orchestrator | Saturday 28 February 2026 00:26:37 +0000 (0:00:01.227) 0:00:05.710 ***** 2026-02-28 00:26:43.912476 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:43.912482 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:43.912489 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:43.912495 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:43.912502 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:43.912508 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:43.912552 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:43.912560 | orchestrator | 2026-02-28 00:26:43.912567 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-28 00:26:43.912573 | orchestrator | Saturday 28 February 2026 00:26:39 +0000 (0:00:01.188) 0:00:06.899 ***** 2026-02-28 00:26:43.912581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:43.912590 | orchestrator | 2026-02-28 00:26:43.912597 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-28 00:26:43.912604 | orchestrator | Saturday 28 
February 2026 00:26:39 +0000 (0:00:00.286) 0:00:07.185 ***** 2026-02-28 00:26:43.912611 | orchestrator | changed: [testbed-manager] 2026-02-28 00:26:43.912618 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:43.912624 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:43.912631 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:43.912637 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:43.912644 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:43.912650 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:43.912657 | orchestrator | 2026-02-28 00:26:43.912664 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-28 00:26:43.912670 | orchestrator | Saturday 28 February 2026 00:26:41 +0000 (0:00:02.060) 0:00:09.245 ***** 2026-02-28 00:26:43.912677 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:43.912685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:43.912693 | orchestrator | 2026-02-28 00:26:43.912700 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-28 00:26:43.912706 | orchestrator | Saturday 28 February 2026 00:26:41 +0000 (0:00:00.271) 0:00:09.516 ***** 2026-02-28 00:26:43.912713 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:43.912720 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:43.912726 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:43.912733 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:43.912739 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:43.912746 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:43.912758 | orchestrator | 2026-02-28 00:26:43.912769 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-02-28 00:26:43.912775 | orchestrator | Saturday 28 February 2026 00:26:42 +0000 (0:00:00.987) 0:00:10.504 ***** 2026-02-28 00:26:43.912782 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:43.912789 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:43.912796 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:43.912802 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:43.912809 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:43.912815 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:43.912822 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:43.912828 | orchestrator | 2026-02-28 00:26:43.912835 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-28 00:26:43.912841 | orchestrator | Saturday 28 February 2026 00:26:43 +0000 (0:00:00.616) 0:00:11.120 ***** 2026-02-28 00:26:43.912848 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:43.912854 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:43.912878 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:43.912887 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:43.912898 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:43.912910 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:43.912921 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:43.912928 | orchestrator | 2026-02-28 00:26:43.912935 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-28 00:26:43.912943 | orchestrator | Saturday 28 February 2026 00:26:43 +0000 (0:00:00.463) 0:00:11.584 ***** 2026-02-28 00:26:43.912949 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:43.912956 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:43.912969 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:56.310397 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 00:26:56.310507 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:56.310522 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:56.310534 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:56.310545 | orchestrator | 2026-02-28 00:26:56.310558 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-28 00:26:56.310571 | orchestrator | Saturday 28 February 2026 00:26:44 +0000 (0:00:00.251) 0:00:11.836 ***** 2026-02-28 00:26:56.310585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:56.310613 | orchestrator | 2026-02-28 00:26:56.310625 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-28 00:26:56.310637 | orchestrator | Saturday 28 February 2026 00:26:44 +0000 (0:00:00.308) 0:00:12.144 ***** 2026-02-28 00:26:56.310648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:56.310659 | orchestrator | 2026-02-28 00:26:56.310671 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-28 00:26:56.310681 | orchestrator | Saturday 28 February 2026 00:26:44 +0000 (0:00:00.322) 0:00:12.467 ***** 2026-02-28 00:26:56.310692 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.310705 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.310716 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.310726 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.310738 | orchestrator | ok: [testbed-node-5] 2026-02-28 
00:26:56.310749 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.310759 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.310770 | orchestrator | 2026-02-28 00:26:56.310781 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-28 00:26:56.310792 | orchestrator | Saturday 28 February 2026 00:26:46 +0000 (0:00:01.564) 0:00:14.032 ***** 2026-02-28 00:26:56.310828 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:56.310840 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:56.310851 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:56.310862 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:56.310897 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:56.310908 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:56.310920 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:56.310934 | orchestrator | 2026-02-28 00:26:56.310946 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-28 00:26:56.310959 | orchestrator | Saturday 28 February 2026 00:26:46 +0000 (0:00:00.300) 0:00:14.333 ***** 2026-02-28 00:26:56.310972 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.310984 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.310997 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.311010 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.311021 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.311031 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.311042 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.311053 | orchestrator | 2026-02-28 00:26:56.311064 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-28 00:26:56.311075 | orchestrator | Saturday 28 February 2026 00:26:47 +0000 (0:00:00.553) 0:00:14.886 ***** 2026-02-28 00:26:56.311086 | orchestrator | skipping: 
[testbed-manager] 2026-02-28 00:26:56.311097 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:56.311108 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:56.311119 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:56.311130 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:56.311140 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:56.311152 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:56.311163 | orchestrator | 2026-02-28 00:26:56.311174 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-28 00:26:56.311186 | orchestrator | Saturday 28 February 2026 00:26:47 +0000 (0:00:00.267) 0:00:15.154 ***** 2026-02-28 00:26:56.311197 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.311208 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:56.311219 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:56.311230 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:56.311241 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:56.311251 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:56.311270 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:56.311281 | orchestrator | 2026-02-28 00:26:56.311292 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-28 00:26:56.311304 | orchestrator | Saturday 28 February 2026 00:26:47 +0000 (0:00:00.551) 0:00:15.705 ***** 2026-02-28 00:26:56.311314 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.311325 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:56.311336 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:56.311347 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:56.311358 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:56.311368 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:56.311379 | orchestrator | changed: 
[testbed-node-2] 2026-02-28 00:26:56.311390 | orchestrator | 2026-02-28 00:26:56.311401 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-28 00:26:56.311412 | orchestrator | Saturday 28 February 2026 00:26:49 +0000 (0:00:01.152) 0:00:16.858 ***** 2026-02-28 00:26:56.311423 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.311434 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.311445 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.311456 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.311466 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.311477 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.311488 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.311499 | orchestrator | 2026-02-28 00:26:56.311510 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-28 00:26:56.311529 | orchestrator | Saturday 28 February 2026 00:26:50 +0000 (0:00:01.136) 0:00:17.994 ***** 2026-02-28 00:26:56.311558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:56.311570 | orchestrator | 2026-02-28 00:26:56.311582 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-28 00:26:56.311592 | orchestrator | Saturday 28 February 2026 00:26:50 +0000 (0:00:00.324) 0:00:18.319 ***** 2026-02-28 00:26:56.311603 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:56.311614 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:56.311625 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:56.311635 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:56.311646 | orchestrator | changed: [testbed-node-0] 2026-02-28 
00:26:56.311657 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:56.311667 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:56.311678 | orchestrator | 2026-02-28 00:26:56.311689 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-28 00:26:56.311700 | orchestrator | Saturday 28 February 2026 00:26:51 +0000 (0:00:01.258) 0:00:19.577 ***** 2026-02-28 00:26:56.311710 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.311721 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.311732 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.311742 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.311753 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.311764 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.311774 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.311785 | orchestrator | 2026-02-28 00:26:56.311796 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-28 00:26:56.311807 | orchestrator | Saturday 28 February 2026 00:26:51 +0000 (0:00:00.247) 0:00:19.825 ***** 2026-02-28 00:26:56.311818 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.311829 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.311840 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.311850 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.311861 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.311910 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.311921 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.311932 | orchestrator | 2026-02-28 00:26:56.311942 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-28 00:26:56.311954 | orchestrator | Saturday 28 February 2026 00:26:52 +0000 (0:00:00.235) 0:00:20.060 ***** 2026-02-28 00:26:56.311965 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.311975 | 
orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.311986 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.311997 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.312007 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.312018 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.312029 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.312039 | orchestrator | 2026-02-28 00:26:56.312050 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-28 00:26:56.312061 | orchestrator | Saturday 28 February 2026 00:26:52 +0000 (0:00:00.224) 0:00:20.284 ***** 2026-02-28 00:26:56.312073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:56.312086 | orchestrator | 2026-02-28 00:26:56.312097 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-28 00:26:56.312107 | orchestrator | Saturday 28 February 2026 00:26:52 +0000 (0:00:00.281) 0:00:20.566 ***** 2026-02-28 00:26:56.312118 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.312129 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.312147 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.312158 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.312169 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.312180 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.312190 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.312201 | orchestrator | 2026-02-28 00:26:56.312212 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-28 00:26:56.312223 | orchestrator | Saturday 28 February 2026 00:26:53 +0000 (0:00:00.551) 0:00:21.118 ***** 2026-02-28 00:26:56.312233 | orchestrator | 
skipping: [testbed-manager] 2026-02-28 00:26:56.312244 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:56.312255 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:56.312266 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:56.312277 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:56.312287 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:56.312298 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:56.312309 | orchestrator | 2026-02-28 00:26:56.312320 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-28 00:26:56.312331 | orchestrator | Saturday 28 February 2026 00:26:53 +0000 (0:00:00.252) 0:00:21.371 ***** 2026-02-28 00:26:56.312342 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.312353 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.312364 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.312374 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:56.312385 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:56.312396 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.312407 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:56.312417 | orchestrator | 2026-02-28 00:26:56.312428 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-28 00:26:56.312439 | orchestrator | Saturday 28 February 2026 00:26:54 +0000 (0:00:01.093) 0:00:22.464 ***** 2026-02-28 00:26:56.312450 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.312461 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.312472 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.312482 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.312493 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:56.312504 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:56.312514 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:56.312525 | orchestrator | 
2026-02-28 00:26:56.312536 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-28 00:26:56.312547 | orchestrator | Saturday 28 February 2026 00:26:55 +0000 (0:00:00.564) 0:00:23.028 ***** 2026-02-28 00:26:56.312558 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:56.312569 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:56.312580 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:56.312598 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:56.312616 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:37.361826 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:37.361991 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:37.362008 | orchestrator | 2026-02-28 00:27:37.362070 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-28 00:27:37.362082 | orchestrator | Saturday 28 February 2026 00:26:56 +0000 (0:00:01.101) 0:00:24.130 ***** 2026-02-28 00:27:37.362092 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:37.362103 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:37.362114 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:37.362123 | orchestrator | changed: [testbed-manager] 2026-02-28 00:27:37.362134 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:37.362144 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:37.362154 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:37.362163 | orchestrator | 2026-02-28 00:27:37.362173 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-28 00:27:37.362184 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:15.950) 0:00:40.080 ***** 2026-02-28 00:27:37.362193 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:37.362227 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:37.362237 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:37.362247 | orchestrator 
| ok: [testbed-node-5]
2026-02-28 00:27:37.362256 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.362266 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.362275 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.362285 | orchestrator |
2026-02-28 00:27:37.362294 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-28 00:27:37.362304 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:00.255) 0:00:40.336 *****
2026-02-28 00:27:37.362314 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.362324 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.362333 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.362343 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.362352 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.362362 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.362374 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.362384 | orchestrator |
2026-02-28 00:27:37.362395 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-28 00:27:37.362406 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:00.238) 0:00:40.574 *****
2026-02-28 00:27:37.362417 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.362428 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.362439 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.362450 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.362460 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.362471 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.362483 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.362494 | orchestrator |
2026-02-28 00:27:37.362505 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-28 00:27:37.362516 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:00.217) 0:00:40.793 *****
2026-02-28 00:27:37.362530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:27:37.362543 | orchestrator |
2026-02-28 00:27:37.362555 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-28 00:27:37.362566 | orchestrator | Saturday 28 February 2026 00:27:13 +0000 (0:00:00.280) 0:00:41.073 *****
2026-02-28 00:27:37.362577 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.362588 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.362599 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.362609 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.362621 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.362632 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.362643 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.362654 | orchestrator |
2026-02-28 00:27:37.362665 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-28 00:27:37.362676 | orchestrator | Saturday 28 February 2026 00:27:14 +0000 (0:00:01.715) 0:00:42.789 *****
2026-02-28 00:27:37.362687 | orchestrator | changed: [testbed-manager]
2026-02-28 00:27:37.362697 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:27:37.362708 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:27:37.362719 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:27:37.362729 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:27:37.362738 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:27:37.362748 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:27:37.362757 | orchestrator |
2026-02-28 00:27:37.362767 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-28 00:27:37.362791 | orchestrator | Saturday 28 February 2026 00:27:16 +0000 (0:00:01.092) 0:00:43.881 *****
2026-02-28 00:27:37.362801 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.362811 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.362820 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.362837 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.362846 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.362856 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.362865 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.362875 | orchestrator |
2026-02-28 00:27:37.362884 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-28 00:27:37.362894 | orchestrator | Saturday 28 February 2026 00:27:16 +0000 (0:00:00.805) 0:00:44.687 *****
2026-02-28 00:27:37.362919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:27:37.362931 | orchestrator |
2026-02-28 00:27:37.362941 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-28 00:27:37.362952 | orchestrator | Saturday 28 February 2026 00:27:17 +0000 (0:00:00.322) 0:00:45.009 *****
2026-02-28 00:27:37.362961 | orchestrator | changed: [testbed-manager]
2026-02-28 00:27:37.362971 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:27:37.362980 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:27:37.362990 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:27:37.362999 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:27:37.363009 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:27:37.363018 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:27:37.363028 | orchestrator |
2026-02-28 00:27:37.363055 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-28 00:27:37.363065 | orchestrator | Saturday 28 February 2026 00:27:18 +0000 (0:00:01.091) 0:00:46.101 *****
2026-02-28 00:27:37.363075 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:27:37.363085 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:27:37.363095 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:27:37.363104 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:27:37.363114 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:27:37.363123 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:27:37.363133 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:27:37.363142 | orchestrator |
2026-02-28 00:27:37.363152 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-28 00:27:37.363162 | orchestrator | Saturday 28 February 2026 00:27:18 +0000 (0:00:00.251) 0:00:46.353 *****
2026-02-28 00:27:37.363171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:27:37.363181 | orchestrator |
2026-02-28 00:27:37.363191 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-28 00:27:37.363201 | orchestrator | Saturday 28 February 2026 00:27:18 +0000 (0:00:00.317) 0:00:46.671 *****
2026-02-28 00:27:37.363211 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.363220 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.363230 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.363239 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.363249 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.363259 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.363268 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.363278 | orchestrator |
2026-02-28 00:27:37.363287 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-28 00:27:37.363297 | orchestrator | Saturday 28 February 2026 00:27:20 +0000 (0:00:02.125) 0:00:48.796 *****
2026-02-28 00:27:37.363307 | orchestrator | changed: [testbed-manager]
2026-02-28 00:27:37.363316 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:27:37.363326 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:27:37.363336 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:27:37.363345 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:27:37.363354 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:27:37.363364 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:27:37.363381 | orchestrator |
2026-02-28 00:27:37.363391 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-28 00:27:37.363401 | orchestrator | Saturday 28 February 2026 00:27:22 +0000 (0:00:01.197) 0:00:49.994 *****
2026-02-28 00:27:37.363410 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:27:37.363420 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:27:37.363430 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:27:37.363439 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:27:37.363449 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:27:37.363458 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:27:37.363468 | orchestrator | changed: [testbed-manager]
2026-02-28 00:27:37.363477 | orchestrator |
2026-02-28 00:27:37.363487 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-28 00:27:37.363496 | orchestrator | Saturday 28 February 2026 00:27:34 +0000 (0:00:12.695) 0:01:02.690 *****
2026-02-28 00:27:37.363506 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.363516 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.363525 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.363535 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.363544 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.363554 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.363563 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.363572 | orchestrator |
2026-02-28 00:27:37.363582 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-28 00:27:37.363592 | orchestrator | Saturday 28 February 2026 00:27:35 +0000 (0:00:00.800) 0:01:03.490 *****
2026-02-28 00:27:37.363601 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.363611 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.363621 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.363630 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.363640 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.363649 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.363659 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.363668 | orchestrator |
2026-02-28 00:27:37.363678 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-28 00:27:37.363687 | orchestrator | Saturday 28 February 2026 00:27:36 +0000 (0:00:00.887) 0:01:04.377 *****
2026-02-28 00:27:37.363702 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.363712 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.363722 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.363731 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.363741 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.363750 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.363760 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.363769 | orchestrator |
2026-02-28 00:27:37.363779 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-28 00:27:37.363789 | orchestrator | Saturday 28 February 2026 00:27:36 +0000 (0:00:00.247) 0:01:04.625 *****
2026-02-28 00:27:37.363798 | orchestrator | ok: [testbed-manager]
2026-02-28 00:27:37.363808 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:27:37.363818 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:27:37.363827 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:27:37.363837 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:27:37.363846 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:27:37.363855 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:27:37.363865 | orchestrator |
2026-02-28 00:27:37.363875 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-28 00:27:37.363884 | orchestrator | Saturday 28 February 2026 00:27:37 +0000 (0:00:00.244) 0:01:04.869 *****
2026-02-28 00:27:37.363894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:27:37.363919 | orchestrator |
2026-02-28 00:27:37.363936 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-28 00:30:07.175129 | orchestrator | Saturday 28 February 2026 00:27:37 +0000 (0:00:00.317) 0:01:05.186 *****
2026-02-28 00:30:07.175239 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:07.175256 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.175268 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.175279 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.175290 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.175301 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.175311 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.175322 | orchestrator |
2026-02-28 00:30:07.175334 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-28 00:30:07.175346 | orchestrator | Saturday 28 February 2026 00:27:39 +0000 (0:00:01.909) 0:01:07.096 *****
2026-02-28 00:30:07.175357 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:07.175368 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:07.175379 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:07.175390 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:07.175401 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:07.175411 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:07.175422 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:07.175433 | orchestrator |
2026-02-28 00:30:07.175444 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-28 00:30:07.175456 | orchestrator | Saturday 28 February 2026 00:27:39 +0000 (0:00:00.567) 0:01:07.663 *****
2026-02-28 00:30:07.175466 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:07.175477 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.175488 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.175499 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.175510 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.175520 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.175531 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.175542 | orchestrator |
2026-02-28 00:30:07.175554 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-28 00:30:07.175565 | orchestrator | Saturday 28 February 2026 00:27:40 +0000 (0:00:00.281) 0:01:07.944 *****
2026-02-28 00:30:07.175576 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:07.175587 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.175597 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.175611 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.175623 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.175635 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.175647 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.175660 | orchestrator |
2026-02-28 00:30:07.175674 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-28 00:30:07.175687 | orchestrator | Saturday 28 February 2026 00:27:41 +0000 (0:00:01.219) 0:01:09.164 *****
2026-02-28 00:30:07.175700 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:07.175712 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:07.175725 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:07.175737 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:07.175749 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:07.175761 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:07.175773 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:07.175786 | orchestrator |
2026-02-28 00:30:07.175804 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-28 00:30:07.175816 | orchestrator | Saturday 28 February 2026 00:27:43 +0000 (0:00:01.951) 0:01:11.115 *****
2026-02-28 00:30:07.175829 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:07.175842 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.175854 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.175866 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.175878 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.175891 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.175903 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.175915 | orchestrator |
2026-02-28 00:30:07.175927 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-28 00:30:07.175968 | orchestrator | Saturday 28 February 2026 00:27:45 +0000 (0:00:02.568) 0:01:13.684 *****
2026-02-28 00:30:07.175982 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:07.176041 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.176053 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.176064 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.176074 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.176085 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.176096 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.176106 | orchestrator |
2026-02-28 00:30:07.176117 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-28 00:30:07.176128 | orchestrator | Saturday 28 February 2026 00:28:23 +0000 (0:00:37.366) 0:01:51.051 *****
2026-02-28 00:30:07.176139 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:07.176150 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:07.176161 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:07.176172 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:07.176182 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:07.176193 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:07.176204 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:07.176214 | orchestrator |
2026-02-28 00:30:07.176225 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-28 00:30:07.176236 | orchestrator | Saturday 28 February 2026 00:29:50 +0000 (0:01:27.058) 0:03:18.110 *****
2026-02-28 00:30:07.176247 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.176258 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.176269 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.176280 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.176290 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.176301 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.176311 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:07.176322 | orchestrator |
2026-02-28 00:30:07.176333 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-28 00:30:07.176344 | orchestrator | Saturday 28 February 2026 00:29:51 +0000 (0:00:01.432) 0:03:19.543 *****
2026-02-28 00:30:07.176355 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:07.176366 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:07.176376 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:07.176387 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:07.176397 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:07.176408 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:07.176419 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:07.176429 | orchestrator |
2026-02-28 00:30:07.176440 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-28 00:30:07.176451 | orchestrator | Saturday 28 February 2026 00:30:05 +0000 (0:00:14.101) 0:03:33.644 *****
2026-02-28 00:30:07.176499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-28 00:30:07.176535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-28 00:30:07.176562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-28 00:30:07.176575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-28 00:30:07.176586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-28 00:30:07.176597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-28 00:30:07.176608 | orchestrator |
2026-02-28 00:30:07.176619 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-28 00:30:07.176631 | orchestrator | Saturday 28 February 2026 00:30:06 +0000 (0:00:00.430) 0:03:34.075 *****
2026-02-28 00:30:07.176642 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176653 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176663 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:07.176675 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176686 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:07.176701 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176713 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:07.176724 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:07.176735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176746 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-28 00:30:07.176767 | orchestrator |
2026-02-28 00:30:07.176778 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-28 00:30:07.176789 | orchestrator | Saturday 28 February 2026 00:30:07 +0000 (0:00:00.844) 0:03:34.919 *****
2026-02-28 00:30:07.176799 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:07.176811 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:07.176822 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:07.176833 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:07.176844 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:07.176862 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008186 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008289 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.008326 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.008338 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:13.008348 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.008358 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:13.008368 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:13.008377 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:13.008387 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:13.008396 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008406 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008416 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.008425 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.008435 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:13.008444 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.008454 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:13.008463 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:13.008473 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:13.008482 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:13.008492 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008501 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008511 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.008521 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:13.008531 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.008541 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.008550 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:13.008560 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:13.008569 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:13.008579 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:13.008589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:13.008612 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:13.008623 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008642 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.008651 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.008668 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.008678 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:13.008688 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:13.008697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:13.008707 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:13.008716 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:13.008727 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:13.008739 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:13.008766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:13.008778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:13.008789 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-28 00:30:13.008799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:13.008810 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:13.008820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-28 00:30:13.008831 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:13.008842 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-28 00:30:13.008852 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008864 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-28 00:30:13.008887 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-28 00:30:13.008921 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.008932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.008943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-28 00:30:13.008954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.008965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.008976 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-28 00:30:13.008987 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.009019 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.009031 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-28 00:30:13.009042 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-28 00:30:13.009054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-28 00:30:13.009074 | orchestrator |
2026-02-28 00:30:13.009087 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-28 00:30:13.009097 | orchestrator | Saturday 28 February 2026 00:30:11 +0000 (0:00:04.750) 0:03:39.669 *****
2026-02-28 00:30:13.009107 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009116 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009126 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009150 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-28 00:30:13.009179 | orchestrator |
2026-02-28 00:30:13.009188 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-28 00:30:13.009198 | orchestrator | Saturday 28 February 2026 00:30:12 +0000 (0:00:00.604) 0:03:40.274 *****
2026-02-28 00:30:13.009207 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:13.009217 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:13.009226 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:13.009236 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:13.009245 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:13.009268 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:30:13.009278 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:13.009297 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:30:13.009306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:13.009316 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:13.009332 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.554782 | orchestrator |
2026-02-28 00:30:26.554910 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-28 00:30:26.554938 | orchestrator | Saturday 28 February 2026 00:30:12 +0000 (0:00:00.551) 0:03:40.826 *****
2026-02-28 00:30:26.554957 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.554979 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:26.555029 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.555043 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:26.555054 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.555066 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.555077 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:26.555087 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:26.555098 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.555109 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.555120 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-28 00:30:26.555133 | orchestrator |
2026-02-28 00:30:26.555151 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-28 00:30:26.555198 | orchestrator | Saturday 28 February 2026 00:30:13 +0000 (0:00:00.599) 0:03:41.426 *****
2026-02-28 00:30:26.555217 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:26.555235 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:26.555253 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:26.555271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-28 00:30:26.555290 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:26.555308 | orchestrator | skipping:
[testbed-node-1] 2026-02-28 00:30:26.555327 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-28 00:30:26.555340 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:30:26.555350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-28 00:30:26.555361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-28 00:30:26.555372 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-28 00:30:26.555383 | orchestrator | 2026-02-28 00:30:26.555394 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-28 00:30:26.555405 | orchestrator | Saturday 28 February 2026 00:30:14 +0000 (0:00:00.598) 0:03:42.024 ***** 2026-02-28 00:30:26.555415 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:30:26.555426 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:30:26.555437 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:30:26.555447 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:30:26.555458 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:30:26.555468 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:30:26.555479 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:30:26.555490 | orchestrator | 2026-02-28 00:30:26.555500 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-28 00:30:26.555511 | orchestrator | Saturday 28 February 2026 00:30:14 +0000 (0:00:00.381) 0:03:42.406 ***** 2026-02-28 00:30:26.555522 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:26.555534 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:26.555544 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:26.555555 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:26.555566 | 
orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:26.555577 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:26.555587 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:26.555598 | orchestrator | 2026-02-28 00:30:26.555609 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-28 00:30:26.555620 | orchestrator | Saturday 28 February 2026 00:30:20 +0000 (0:00:05.915) 0:03:48.321 ***** 2026-02-28 00:30:26.555630 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-28 00:30:26.555641 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-28 00:30:26.555652 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:30:26.555663 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:30:26.555674 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-28 00:30:26.555684 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-28 00:30:26.555695 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:30:26.555705 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-28 00:30:26.555717 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:30:26.555728 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-28 00:30:26.555756 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:30:26.555768 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:30:26.555779 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-28 00:30:26.555789 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:30:26.555800 | orchestrator | 2026-02-28 00:30:26.555820 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-28 00:30:26.555831 | orchestrator | Saturday 28 February 2026 00:30:20 +0000 (0:00:00.332) 0:03:48.653 ***** 2026-02-28 00:30:26.555842 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-28 00:30:26.555853 | orchestrator | ok: [testbed-node-4] => 
(item=cron) 2026-02-28 00:30:26.555864 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-28 00:30:26.555895 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-28 00:30:26.555906 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-28 00:30:26.555917 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-28 00:30:26.555927 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-28 00:30:26.555938 | orchestrator | 2026-02-28 00:30:26.555949 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-28 00:30:26.555960 | orchestrator | Saturday 28 February 2026 00:30:22 +0000 (0:00:01.220) 0:03:49.874 ***** 2026-02-28 00:30:26.555972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:30:26.555986 | orchestrator | 2026-02-28 00:30:26.555997 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-28 00:30:26.556035 | orchestrator | Saturday 28 February 2026 00:30:22 +0000 (0:00:00.392) 0:03:50.266 ***** 2026-02-28 00:30:26.556053 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:26.556064 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:26.556075 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:26.556086 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:26.556096 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:26.556107 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:26.556117 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:26.556128 | orchestrator | 2026-02-28 00:30:26.556139 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-28 00:30:26.556149 | orchestrator | Saturday 28 February 2026 00:30:23 +0000 (0:00:01.254) 0:03:51.521 
***** 2026-02-28 00:30:26.556160 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:26.556171 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:26.556181 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:26.556192 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:26.556202 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:26.556213 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:26.556223 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:26.556234 | orchestrator | 2026-02-28 00:30:26.556245 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-28 00:30:26.556255 | orchestrator | Saturday 28 February 2026 00:30:24 +0000 (0:00:00.606) 0:03:52.128 ***** 2026-02-28 00:30:26.556266 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:26.556277 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:26.556288 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:26.556298 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:26.556309 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:26.556319 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:26.556330 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:26.556340 | orchestrator | 2026-02-28 00:30:26.556351 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-28 00:30:26.556362 | orchestrator | Saturday 28 February 2026 00:30:24 +0000 (0:00:00.632) 0:03:52.760 ***** 2026-02-28 00:30:26.556373 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:26.556384 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:26.556394 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:26.556405 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:26.556416 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:26.556426 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:26.556436 | orchestrator | ok: [testbed-node-2] 2026-02-28 
00:30:26.556447 | orchestrator | 2026-02-28 00:30:26.556458 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-28 00:30:26.556476 | orchestrator | Saturday 28 February 2026 00:30:25 +0000 (0:00:00.628) 0:03:53.388 ***** 2026-02-28 00:30:26.556497 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237110.1822152, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:26.556512 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237196.0265636, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:26.556524 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237140.0289629, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:26.556559 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237137.454769, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356734 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237153.5060441, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356836 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237154.124792, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356853 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237130.5140994, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356891 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356918 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356931 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356942 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356982 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.356995 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 
00:30:31.357049 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:31.357071 | orchestrator | 2026-02-28 00:30:31.357085 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-28 00:30:31.357098 | orchestrator | Saturday 28 February 2026 00:30:26 +0000 (0:00:00.985) 0:03:54.374 ***** 2026-02-28 00:30:31.357109 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:31.357120 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:31.357131 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:31.357142 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:31.357153 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:31.357164 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:31.357175 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:31.357186 | orchestrator | 2026-02-28 00:30:31.357197 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-28 00:30:31.357208 | orchestrator | Saturday 28 February 2026 00:30:27 +0000 (0:00:01.075) 0:03:55.449 ***** 2026-02-28 00:30:31.357219 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:31.357230 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:31.357241 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:31.357251 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:31.357262 | orchestrator | changed: [testbed-node-5] 
2026-02-28 00:30:31.357273 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:31.357283 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:31.357294 | orchestrator |
2026-02-28 00:30:31.357310 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-28 00:30:31.357322 | orchestrator | Saturday 28 February 2026 00:30:28 +0000 (0:00:01.154) 0:03:56.603 *****
2026-02-28 00:30:31.357333 | orchestrator | changed: [testbed-manager]
2026-02-28 00:30:31.357344 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:30:31.357355 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:30:31.357365 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:30:31.357376 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:30:31.357387 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:30:31.357397 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:30:31.357408 | orchestrator |
2026-02-28 00:30:31.357419 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-28 00:30:31.357430 | orchestrator | Saturday 28 February 2026 00:30:29 +0000 (0:00:01.107) 0:03:57.711 *****
2026-02-28 00:30:31.357441 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:30:31.357452 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:30:31.357463 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:30:31.357474 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:30:31.357484 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:30:31.357495 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:30:31.357506 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:30:31.357516 | orchestrator |
2026-02-28 00:30:31.357527 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-28 00:30:31.357538 | orchestrator | Saturday 28 February 2026 00:30:30 +0000 (0:00:00.303) 0:03:58.014 *****
2026-02-28 00:30:31.357549 | orchestrator | ok: [testbed-manager]
2026-02-28 00:30:31.357561 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:30:31.357572 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:30:31.357582 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:30:31.357593 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:30:31.357604 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:30:31.357615 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:30:31.357625 | orchestrator |
2026-02-28 00:30:31.357636 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-28 00:30:31.357647 | orchestrator | Saturday 28 February 2026 00:30:30 +0000 (0:00:00.743) 0:03:58.757 *****
2026-02-28 00:30:31.357660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:30:31.357679 | orchestrator |
2026-02-28 00:30:31.357691 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-28 00:30:31.357709 | orchestrator | Saturday 28 February 2026 00:30:31 +0000 (0:00:00.422) 0:03:59.180 *****
2026-02-28 00:31:48.235105 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235203 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:48.235215 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:48.235219 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:48.235224 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:48.235228 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:48.235232 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:48.235236 | orchestrator |
2026-02-28 00:31:48.235241 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-02-28 00:31:48.235246 | orchestrator | Saturday 28 February 2026 00:30:39 +0000 (0:00:08.096) 0:04:07.276 *****
2026-02-28 00:31:48.235250 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235255 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:48.235259 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:48.235262 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:48.235266 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:48.235270 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:48.235274 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:48.235278 | orchestrator |
2026-02-28 00:31:48.235282 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-28 00:31:48.235286 | orchestrator | Saturday 28 February 2026 00:30:40 +0000 (0:00:01.348) 0:04:08.625 *****
2026-02-28 00:31:48.235289 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:48.235295 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235302 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:48.235308 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:48.235313 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:48.235319 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:48.235325 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:48.235330 | orchestrator |
2026-02-28 00:31:48.235335 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-28 00:31:48.235341 | orchestrator | Saturday 28 February 2026 00:30:41 +0000 (0:00:01.153) 0:04:09.778 *****
2026-02-28 00:31:48.235348 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235354 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:48.235360 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:48.235367 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:48.235374 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:48.235380 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:48.235388 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:48.235392 | orchestrator |
2026-02-28 00:31:48.235396 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-28 00:31:48.235401 | orchestrator | Saturday 28 February 2026 00:30:42 +0000 (0:00:00.337) 0:04:10.116 *****
2026-02-28 00:31:48.235405 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235409 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:48.235413 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:48.235417 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:48.235421 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:48.235425 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:48.235429 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:48.235432 | orchestrator |
2026-02-28 00:31:48.235436 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-28 00:31:48.235440 | orchestrator | Saturday 28 February 2026 00:30:42 +0000 (0:00:00.330) 0:04:10.446 *****
2026-02-28 00:31:48.235444 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235448 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:48.235451 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:48.235471 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:48.235475 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:48.235479 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:48.235483 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:48.235486 | orchestrator |
2026-02-28 00:31:48.235490 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-28 00:31:48.235495 | orchestrator | Saturday 28 February 2026 00:30:42 +0000 (0:00:00.333) 0:04:10.779 *****
2026-02-28 00:31:48.235498 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:48.235502 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:48.235506 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:48.235510 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:48.235514 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:48.235518 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:48.235521 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:48.235525 | orchestrator |
2026-02-28 00:31:48.235529 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-28 00:31:48.235533 | orchestrator | Saturday 28 February 2026 00:30:48 +0000 (0:00:05.794) 0:04:16.573 *****
2026-02-28 00:31:48.235538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:31:48.235544 | orchestrator |
2026-02-28 00:31:48.235548 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-28 00:31:48.235552 | orchestrator | Saturday 28 February 2026 00:30:49 +0000 (0:00:00.431) 0:04:17.005 *****
2026-02-28 00:31:48.235555 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235559 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-28 00:31:48.235563 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235567 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-28 00:31:48.235571 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:48.235587 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235591 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-28 00:31:48.235594 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:48.235598 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235602 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:48.235606 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-28 00:31:48.235610 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235614 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-28 00:31:48.235617 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:48.235621 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235625 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-28 00:31:48.235640 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:48.235644 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:48.235648 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-28 00:31:48.235653 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-28 00:31:48.235657 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:48.235661 | orchestrator |
2026-02-28 00:31:48.235666 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-28 00:31:48.235670 | orchestrator | Saturday 28 February 2026 00:30:49 +0000 (0:00:00.393) 0:04:17.399 *****
2026-02-28 00:31:48.235675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:31:48.235679 | orchestrator |
2026-02-28 00:31:48.235683 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-28 00:31:48.235691 | orchestrator | Saturday 28 February 2026 00:30:50 +0000 (0:00:00.444) 0:04:17.844 *****
2026-02-28 00:31:48.235696 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-28 00:31:48.235701 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-28 00:31:48.235706 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:48.235710 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-28 00:31:48.235714 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:48.235718 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-28 00:31:48.235723 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:48.235727 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-28 00:31:48.235731 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:48.235736 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-28 00:31:48.235740 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:48.235744 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:48.235748 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-28 00:31:48.235752 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:48.235757 | orchestrator |
2026-02-28 00:31:48.235761 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-28 00:31:48.235765 | orchestrator | Saturday 28 February 2026 00:30:50 +0000 (0:00:00.330) 0:04:18.174 *****
2026-02-28 00:31:48.235770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:31:48.235774 | orchestrator |
2026-02-28 00:31:48.235778 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-28 00:31:48.235782 | orchestrator | Saturday 28 February 2026 00:30:50 +0000 (0:00:00.446) 0:04:18.620 *****
2026-02-28 00:31:48.235787 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:48.235791 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:48.235795 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:48.235800 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:48.235807 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:48.235811 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:48.235816 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:48.235820 | orchestrator |
2026-02-28 00:31:48.235824 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-28 00:31:48.235828 | orchestrator | Saturday 28 February 2026 00:31:25 +0000 (0:00:34.674) 0:04:53.295 *****
2026-02-28 00:31:48.235832 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:48.235837 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:48.235841 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:48.235845 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:48.235850 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:48.235854 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:48.235858 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:48.235862 | orchestrator |
2026-02-28 00:31:48.235867 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-28 00:31:48.235871 | orchestrator | Saturday 28 February 2026 00:31:33 +0000 (0:00:07.756) 0:05:01.052 *****
2026-02-28 00:31:48.235875 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:48.235879 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:48.235883 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:48.235888 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:48.235892 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:48.235896 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:48.235900 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:48.235905 |
orchestrator | 2026-02-28 00:31:48.235909 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-28 00:31:48.235917 | orchestrator | Saturday 28 February 2026 00:31:40 +0000 (0:00:07.603) 0:05:08.655 ***** 2026-02-28 00:31:48.235921 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:48.235926 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:48.235930 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:48.235934 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:48.235939 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:48.235943 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:48.235947 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:48.235951 | orchestrator | 2026-02-28 00:31:48.235956 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-28 00:31:48.235960 | orchestrator | Saturday 28 February 2026 00:31:42 +0000 (0:00:01.663) 0:05:10.319 ***** 2026-02-28 00:31:48.235964 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:31:48.235968 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:31:48.235973 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:31:48.235977 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:31:48.235981 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:31:48.235986 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:31:48.235990 | orchestrator | changed: [testbed-manager] 2026-02-28 00:31:48.235994 | orchestrator | 2026-02-28 00:31:48.236002 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-28 00:31:59.972982 | orchestrator | Saturday 28 February 2026 00:31:48 +0000 (0:00:05.731) 0:05:16.050 ***** 2026-02-28 00:31:59.973141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:31:59.973162 | orchestrator | 2026-02-28 00:31:59.973175 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-28 00:31:59.973187 | orchestrator | Saturday 28 February 2026 00:31:48 +0000 (0:00:00.446) 0:05:16.497 ***** 2026-02-28 00:31:59.973209 | orchestrator | changed: [testbed-manager] 2026-02-28 00:31:59.973222 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:31:59.973234 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:31:59.973245 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:31:59.973255 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:31:59.973266 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:31:59.973278 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:31:59.973289 | orchestrator | 2026-02-28 00:31:59.973300 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-28 00:31:59.973311 | orchestrator | Saturday 28 February 2026 00:31:49 +0000 (0:00:00.765) 0:05:17.262 ***** 2026-02-28 00:31:59.973322 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:59.973334 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:59.973345 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:59.973356 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:59.973367 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:59.973377 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:59.973388 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:59.973399 | orchestrator | 2026-02-28 00:31:59.973410 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-28 00:31:59.973421 | orchestrator | Saturday 28 February 2026 00:31:51 +0000 (0:00:01.712) 0:05:18.975 ***** 2026-02-28 00:31:59.973432 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:31:59.973443 | orchestrator | changed: [testbed-manager] 
2026-02-28 00:31:59.973454 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:31:59.973465 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:31:59.973475 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:31:59.973487 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:31:59.973498 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:31:59.973509 | orchestrator | 2026-02-28 00:31:59.973522 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-28 00:31:59.973534 | orchestrator | Saturday 28 February 2026 00:31:51 +0000 (0:00:00.748) 0:05:19.723 ***** 2026-02-28 00:31:59.973571 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:59.973585 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:59.973597 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:59.973610 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:59.973622 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:59.973634 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:59.973645 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:59.973658 | orchestrator | 2026-02-28 00:31:59.973671 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-28 00:31:59.973683 | orchestrator | Saturday 28 February 2026 00:31:52 +0000 (0:00:00.315) 0:05:20.039 ***** 2026-02-28 00:31:59.973695 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:59.973707 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:59.973719 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:59.973746 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:59.973759 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:59.973771 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:59.973783 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:59.973795 | orchestrator | 2026-02-28 00:31:59.973807 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-28 00:31:59.973820 | orchestrator | Saturday 28 February 2026 00:31:52 +0000 (0:00:00.502) 0:05:20.542 ***** 2026-02-28 00:31:59.973831 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:59.973843 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:59.973856 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:59.973868 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:59.973880 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:59.973890 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:59.973901 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:59.973911 | orchestrator | 2026-02-28 00:31:59.973922 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-28 00:31:59.973933 | orchestrator | Saturday 28 February 2026 00:31:53 +0000 (0:00:00.341) 0:05:20.883 ***** 2026-02-28 00:31:59.973944 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:59.973955 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:59.973966 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:59.973977 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:59.973988 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:59.973998 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:59.974009 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:59.974099 | orchestrator | 2026-02-28 00:31:59.974112 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-28 00:31:59.974124 | orchestrator | Saturday 28 February 2026 00:31:53 +0000 (0:00:00.323) 0:05:21.207 ***** 2026-02-28 00:31:59.974135 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:59.974145 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:59.974156 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:59.974167 | orchestrator | ok: 
[testbed-node-5] 2026-02-28 00:31:59.974178 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:59.974188 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:59.974199 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:59.974210 | orchestrator | 2026-02-28 00:31:59.974221 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-28 00:31:59.974232 | orchestrator | Saturday 28 February 2026 00:31:53 +0000 (0:00:00.324) 0:05:21.531 ***** 2026-02-28 00:31:59.974243 | orchestrator | ok: [testbed-manager] =>  2026-02-28 00:31:59.974254 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974264 | orchestrator | ok: [testbed-node-3] =>  2026-02-28 00:31:59.974275 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974286 | orchestrator | ok: [testbed-node-4] =>  2026-02-28 00:31:59.974297 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974308 | orchestrator | ok: [testbed-node-5] =>  2026-02-28 00:31:59.974318 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974348 | orchestrator | ok: [testbed-node-0] =>  2026-02-28 00:31:59.974369 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974380 | orchestrator | ok: [testbed-node-1] =>  2026-02-28 00:31:59.974391 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974401 | orchestrator | ok: [testbed-node-2] =>  2026-02-28 00:31:59.974412 | orchestrator |  docker_version: 5:27.5.1 2026-02-28 00:31:59.974423 | orchestrator | 2026-02-28 00:31:59.974434 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-28 00:31:59.974445 | orchestrator | Saturday 28 February 2026 00:31:54 +0000 (0:00:00.305) 0:05:21.837 ***** 2026-02-28 00:31:59.974456 | orchestrator | ok: [testbed-manager] =>  2026-02-28 00:31:59.974466 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974477 | orchestrator | ok: [testbed-node-3] =>  2026-02-28 00:31:59.974488 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974498 | orchestrator | ok: [testbed-node-4] =>  2026-02-28 00:31:59.974509 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974520 | orchestrator | ok: [testbed-node-5] =>  2026-02-28 00:31:59.974530 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974541 | orchestrator | ok: [testbed-node-0] =>  2026-02-28 00:31:59.974552 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974562 | orchestrator | ok: [testbed-node-1] =>  2026-02-28 00:31:59.974573 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974584 | orchestrator | ok: [testbed-node-2] =>  2026-02-28 00:31:59.974595 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-28 00:31:59.974606 | orchestrator | 2026-02-28 00:31:59.974617 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-28 00:31:59.974628 | orchestrator | Saturday 28 February 2026 00:31:54 +0000 (0:00:00.356) 0:05:22.194 ***** 2026-02-28 00:31:59.974639 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:59.974649 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:59.974660 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:59.974671 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:59.974681 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:59.974692 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:59.974703 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:59.974714 | orchestrator | 2026-02-28 00:31:59.974725 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-28 00:31:59.974736 | orchestrator | Saturday 28 February 2026 00:31:54 +0000 (0:00:00.311) 0:05:22.505 ***** 2026-02-28 00:31:59.974746 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:59.974757 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:59.974768 
| orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:59.974779 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:59.974789 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:59.974800 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:59.974811 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:59.974821 | orchestrator | 2026-02-28 00:31:59.974832 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-28 00:31:59.974843 | orchestrator | Saturday 28 February 2026 00:31:54 +0000 (0:00:00.328) 0:05:22.834 ***** 2026-02-28 00:31:59.974857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:31:59.974869 | orchestrator | 2026-02-28 00:31:59.974886 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-28 00:31:59.974898 | orchestrator | Saturday 28 February 2026 00:31:55 +0000 (0:00:00.483) 0:05:23.318 ***** 2026-02-28 00:31:59.974909 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:59.974920 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:59.974931 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:59.974941 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:59.974952 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:59.974969 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:59.974980 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:59.974991 | orchestrator | 2026-02-28 00:31:59.975002 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-28 00:31:59.975013 | orchestrator | Saturday 28 February 2026 00:31:56 +0000 (0:00:01.021) 0:05:24.339 ***** 2026-02-28 00:31:59.975023 | orchestrator | ok: [testbed-node-5] 
2026-02-28 00:31:59.975034 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:59.975045 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:59.975113 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:59.975125 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:59.975136 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:59.975146 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:59.975157 | orchestrator | 2026-02-28 00:31:59.975168 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-28 00:31:59.975181 | orchestrator | Saturday 28 February 2026 00:31:59 +0000 (0:00:02.985) 0:05:27.325 ***** 2026-02-28 00:31:59.975192 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-28 00:31:59.975204 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-28 00:31:59.975215 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-28 00:31:59.975226 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-28 00:31:59.975237 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-28 00:31:59.975248 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:59.975259 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-28 00:31:59.975270 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-28 00:31:59.975281 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-28 00:31:59.975292 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-28 00:31:59.975303 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:59.975314 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-28 00:31:59.975324 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-28 00:31:59.975335 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-02-28 00:31:59.975346 | 
orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:59.975357 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-28 00:31:59.975376 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:00.973206 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-28 00:33:00.973315 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-28 00:33:00.973330 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-28 00:33:00.973342 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-28 00:33:00.973353 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-28 00:33:00.973364 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:00.973376 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:00.973388 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-28 00:33:00.973398 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-28 00:33:00.973409 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-28 00:33:00.973420 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:00.973431 | orchestrator | 2026-02-28 00:33:00.973443 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-28 00:33:00.973455 | orchestrator | Saturday 28 February 2026 00:32:00 +0000 (0:00:00.621) 0:05:27.947 ***** 2026-02-28 00:33:00.973466 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.973477 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.973488 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.973498 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.973510 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.973521 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.973557 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.973568 | orchestrator | 2026-02-28 
00:33:00.973580 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-28 00:33:00.973590 | orchestrator | Saturday 28 February 2026 00:32:06 +0000 (0:00:06.272) 0:05:34.219 ***** 2026-02-28 00:33:00.973601 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.973612 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.973622 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.973633 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.973644 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.973654 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.973665 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.973675 | orchestrator | 2026-02-28 00:33:00.973686 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-28 00:33:00.973700 | orchestrator | Saturday 28 February 2026 00:32:07 +0000 (0:00:01.043) 0:05:35.263 ***** 2026-02-28 00:33:00.973713 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.973726 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.973738 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.973751 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.973763 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.973775 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.973787 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.973799 | orchestrator | 2026-02-28 00:33:00.973812 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-28 00:33:00.973825 | orchestrator | Saturday 28 February 2026 00:32:15 +0000 (0:00:08.468) 0:05:43.731 ***** 2026-02-28 00:33:00.973837 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:00.973849 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.973862 | orchestrator | changed: [testbed-node-5] 2026-02-28 
00:33:00.973874 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.973886 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.973898 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.973910 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.973924 | orchestrator | 2026-02-28 00:33:00.973936 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-28 00:33:00.973949 | orchestrator | Saturday 28 February 2026 00:32:19 +0000 (0:00:03.334) 0:05:47.065 ***** 2026-02-28 00:33:00.973961 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.973974 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.973987 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.973999 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.974011 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.974118 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.974129 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.974139 | orchestrator | 2026-02-28 00:33:00.974150 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-28 00:33:00.974161 | orchestrator | Saturday 28 February 2026 00:32:20 +0000 (0:00:01.363) 0:05:48.429 ***** 2026-02-28 00:33:00.974172 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.974183 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.974193 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.974204 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.974214 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.974225 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.974236 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.974247 | orchestrator | 2026-02-28 00:33:00.974257 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-02-28 
00:33:00.974268 | orchestrator | Saturday 28 February 2026 00:32:22 +0000 (0:00:01.626) 0:05:50.055 ***** 2026-02-28 00:33:00.974279 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:00.974289 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:00.974300 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:00.974311 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:00.974331 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:00.974341 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:00.974352 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:00.974363 | orchestrator | 2026-02-28 00:33:00.974374 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-28 00:33:00.974385 | orchestrator | Saturday 28 February 2026 00:32:22 +0000 (0:00:00.578) 0:05:50.633 ***** 2026-02-28 00:33:00.974395 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.974406 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.974417 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.974427 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.974438 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.974448 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.974459 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.974470 | orchestrator | 2026-02-28 00:33:00.974481 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-28 00:33:00.974511 | orchestrator | Saturday 28 February 2026 00:32:32 +0000 (0:00:09.406) 0:06:00.040 ***** 2026-02-28 00:33:00.974523 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:00.974534 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.974544 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.974555 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.974565 | orchestrator | changed: 
[testbed-node-0] 2026-02-28 00:33:00.974575 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.974586 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.974597 | orchestrator | 2026-02-28 00:33:00.974608 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-02-28 00:33:00.974619 | orchestrator | Saturday 28 February 2026 00:32:33 +0000 (0:00:00.923) 0:06:00.964 ***** 2026-02-28 00:33:00.974629 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.974640 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.974651 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.974661 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.974672 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.974682 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.974693 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.974704 | orchestrator | 2026-02-28 00:33:00.974714 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-02-28 00:33:00.974725 | orchestrator | Saturday 28 February 2026 00:32:42 +0000 (0:00:08.984) 0:06:09.948 ***** 2026-02-28 00:33:00.974736 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.974747 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.974757 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.974768 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.974778 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.974789 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.974799 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.974810 | orchestrator | 2026-02-28 00:33:00.974821 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-02-28 00:33:00.974831 | orchestrator | Saturday 28 February 2026 00:32:53 +0000 (0:00:11.776) 0:06:21.725 ***** 2026-02-28 
00:33:00.974842 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-02-28 00:33:00.974853 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-02-28 00:33:00.974864 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-02-28 00:33:00.974874 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-02-28 00:33:00.974885 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-02-28 00:33:00.974896 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-02-28 00:33:00.974906 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-02-28 00:33:00.975287 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-02-28 00:33:00.975302 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-02-28 00:33:00.975323 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-02-28 00:33:00.975334 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-02-28 00:33:00.975405 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-02-28 00:33:00.975419 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-02-28 00:33:00.975429 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-02-28 00:33:00.975440 | orchestrator | 2026-02-28 00:33:00.975451 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-02-28 00:33:00.975462 | orchestrator | Saturday 28 February 2026 00:32:55 +0000 (0:00:01.307) 0:06:23.033 ***** 2026-02-28 00:33:00.975478 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:00.975489 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:00.975500 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:00.975510 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:00.975521 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:00.975532 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:00.975542 | orchestrator 
| skipping: [testbed-node-2] 2026-02-28 00:33:00.975553 | orchestrator | 2026-02-28 00:33:00.975564 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-02-28 00:33:00.975575 | orchestrator | Saturday 28 February 2026 00:32:55 +0000 (0:00:00.547) 0:06:23.580 ***** 2026-02-28 00:33:00.975586 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:00.975597 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:00.975607 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:00.975618 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:00.975629 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:00.975639 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:00.975650 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:00.975661 | orchestrator | 2026-02-28 00:33:00.975671 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-02-28 00:33:00.975684 | orchestrator | Saturday 28 February 2026 00:32:59 +0000 (0:00:04.230) 0:06:27.811 ***** 2026-02-28 00:33:00.975695 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:00.975705 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:00.975716 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:00.975727 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:00.975737 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:00.975748 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:00.975758 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:00.975769 | orchestrator | 2026-02-28 00:33:00.975780 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-02-28 00:33:00.975792 | orchestrator | Saturday 28 February 2026 00:33:00 +0000 (0:00:00.508) 0:06:28.319 ***** 2026-02-28 00:33:00.975802 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2026-02-28 00:33:00.975813 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-28 00:33:00.975824 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:00.975835 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-28 00:33:00.975845 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-28 00:33:00.975856 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:00.975866 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-28 00:33:00.975877 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-28 00:33:00.975889 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:00.975911 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-28 00:33:20.874373 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-28 00:33:20.874479 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:20.874496 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-28 00:33:20.874508 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-28 00:33:20.874519 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:20.874550 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-28 00:33:20.874562 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-28 00:33:20.874573 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:20.874584 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-28 00:33:20.874594 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-28 00:33:20.874605 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:20.874617 | orchestrator | 2026-02-28 00:33:20.874630 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-02-28 00:33:20.874642 | 
orchestrator | Saturday 28 February 2026 00:33:01 +0000 (0:00:00.735) 0:06:29.055 ***** 2026-02-28 00:33:20.874653 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:20.874664 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:20.874675 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:20.874685 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:20.874696 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:20.874707 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:20.874717 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:20.874728 | orchestrator | 2026-02-28 00:33:20.874739 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-28 00:33:20.874750 | orchestrator | Saturday 28 February 2026 00:33:01 +0000 (0:00:00.616) 0:06:29.671 ***** 2026-02-28 00:33:20.874761 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:20.874772 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:20.874782 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:20.874793 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:20.874804 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:20.874814 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:20.874825 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:20.874836 | orchestrator | 2026-02-28 00:33:20.874847 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-28 00:33:20.874858 | orchestrator | Saturday 28 February 2026 00:33:02 +0000 (0:00:00.535) 0:06:30.207 ***** 2026-02-28 00:33:20.874868 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:20.874879 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:20.874890 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:20.874901 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:20.874911 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:33:20.874924 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:20.874937 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:20.874949 | orchestrator | 2026-02-28 00:33:20.874961 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-28 00:33:20.874973 | orchestrator | Saturday 28 February 2026 00:33:02 +0000 (0:00:00.525) 0:06:30.732 ***** 2026-02-28 00:33:20.874986 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:20.874998 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:20.875010 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:20.875022 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:20.875035 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:20.875046 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:20.875059 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.875071 | orchestrator | 2026-02-28 00:33:20.875108 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-28 00:33:20.875123 | orchestrator | Saturday 28 February 2026 00:33:04 +0000 (0:00:02.054) 0:06:32.787 ***** 2026-02-28 00:33:20.875134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:20.875148 | orchestrator | 2026-02-28 00:33:20.875159 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-28 00:33:20.875170 | orchestrator | Saturday 28 February 2026 00:33:05 +0000 (0:00:00.929) 0:06:33.717 ***** 2026-02-28 00:33:20.875196 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.875207 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:20.875218 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:20.875229 | orchestrator | 
changed: [testbed-node-5] 2026-02-28 00:33:20.875239 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:20.875250 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:20.875260 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:20.875271 | orchestrator | 2026-02-28 00:33:20.875281 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-28 00:33:20.875292 | orchestrator | Saturday 28 February 2026 00:33:06 +0000 (0:00:00.855) 0:06:34.572 ***** 2026-02-28 00:33:20.875303 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.875313 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:20.875324 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:20.875334 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:20.875345 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:20.875355 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:20.875366 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:20.875376 | orchestrator | 2026-02-28 00:33:20.875387 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-28 00:33:20.875398 | orchestrator | Saturday 28 February 2026 00:33:07 +0000 (0:00:00.888) 0:06:35.461 ***** 2026-02-28 00:33:20.875408 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.875419 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:20.875429 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:20.875440 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:20.875450 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:20.875461 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:20.875472 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:20.875482 | orchestrator | 2026-02-28 00:33:20.875493 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-28 00:33:20.875519 | 
orchestrator | Saturday 28 February 2026 00:33:09 +0000 (0:00:01.597) 0:06:37.058 ***** 2026-02-28 00:33:20.875530 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:20.875541 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:20.875552 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:20.875563 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:20.875574 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:20.875585 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:20.875595 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:20.875606 | orchestrator | 2026-02-28 00:33:20.875617 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-28 00:33:20.875628 | orchestrator | Saturday 28 February 2026 00:33:10 +0000 (0:00:01.396) 0:06:38.455 ***** 2026-02-28 00:33:20.875639 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.875649 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:20.875660 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:20.875671 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:20.875682 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:20.875692 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:20.875703 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:20.875713 | orchestrator | 2026-02-28 00:33:20.875724 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-28 00:33:20.875735 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:01.335) 0:06:39.790 ***** 2026-02-28 00:33:20.875746 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:20.875756 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:20.875767 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:20.875778 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:20.875788 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:20.875799 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:20.875810 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:20.875820 | orchestrator | 2026-02-28 00:33:20.875838 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-28 00:33:20.875849 | orchestrator | Saturday 28 February 2026 00:33:13 +0000 (0:00:01.412) 0:06:41.203 ***** 2026-02-28 00:33:20.875860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:20.875871 | orchestrator | 2026-02-28 00:33:20.875882 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-28 00:33:20.875892 | orchestrator | Saturday 28 February 2026 00:33:14 +0000 (0:00:01.125) 0:06:42.328 ***** 2026-02-28 00:33:20.875903 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.875914 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:20.875924 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:20.875935 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:20.875946 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:20.875956 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:20.875967 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:20.875977 | orchestrator | 2026-02-28 00:33:20.875988 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-28 00:33:20.875999 | orchestrator | Saturday 28 February 2026 00:33:15 +0000 (0:00:01.367) 0:06:43.696 ***** 2026-02-28 00:33:20.876010 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.876021 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:20.876031 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:20.876041 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:20.876052 | orchestrator | 
ok: [testbed-node-0] 2026-02-28 00:33:20.876071 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:20.876082 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:20.876135 | orchestrator | 2026-02-28 00:33:20.876146 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-28 00:33:20.876157 | orchestrator | Saturday 28 February 2026 00:33:17 +0000 (0:00:01.207) 0:06:44.904 ***** 2026-02-28 00:33:20.876168 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.876179 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:20.876189 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:20.876200 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:20.876210 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:20.876221 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:20.876231 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:20.876241 | orchestrator | 2026-02-28 00:33:20.876252 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-28 00:33:20.876263 | orchestrator | Saturday 28 February 2026 00:33:18 +0000 (0:00:01.179) 0:06:46.083 ***** 2026-02-28 00:33:20.876273 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:20.876284 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:20.876294 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:20.876305 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:20.876315 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:20.876325 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:20.876336 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:20.876346 | orchestrator | 2026-02-28 00:33:20.876357 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-28 00:33:20.876368 | orchestrator | Saturday 28 February 2026 00:33:19 +0000 (0:00:01.323) 0:06:47.407 ***** 2026-02-28 00:33:20.876378 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:20.876389 | orchestrator | 2026-02-28 00:33:20.876400 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:20.876410 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.962) 0:06:48.370 ***** 2026-02-28 00:33:20.876421 | orchestrator | 2026-02-28 00:33:20.876432 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:20.876450 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.047) 0:06:48.417 ***** 2026-02-28 00:33:20.876460 | orchestrator | 2026-02-28 00:33:20.876471 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:20.876482 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.055) 0:06:48.472 ***** 2026-02-28 00:33:20.876492 | orchestrator | 2026-02-28 00:33:20.876503 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:20.876521 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.051) 0:06:48.524 ***** 2026-02-28 00:33:47.020042 | orchestrator | 2026-02-28 00:33:47.020196 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:47.020214 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.042) 0:06:48.566 ***** 2026-02-28 00:33:47.020225 | orchestrator | 2026-02-28 00:33:47.020237 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:47.020248 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.046) 0:06:48.613 ***** 2026-02-28 00:33:47.020259 | orchestrator | 2026-02-28 
00:33:47.020270 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:47.020281 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.039) 0:06:48.652 ***** 2026-02-28 00:33:47.020292 | orchestrator | 2026-02-28 00:33:47.020303 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-28 00:33:47.020314 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.040) 0:06:48.693 ***** 2026-02-28 00:33:47.020324 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:47.020337 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:47.020347 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:47.020358 | orchestrator | 2026-02-28 00:33:47.020369 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-28 00:33:47.020380 | orchestrator | Saturday 28 February 2026 00:33:22 +0000 (0:00:01.160) 0:06:49.853 ***** 2026-02-28 00:33:47.020391 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:47.020402 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:47.020413 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:47.020424 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:47.020435 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:47.020445 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:47.020456 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:47.020467 | orchestrator | 2026-02-28 00:33:47.020478 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-28 00:33:47.020488 | orchestrator | Saturday 28 February 2026 00:33:23 +0000 (0:00:01.476) 0:06:51.329 ***** 2026-02-28 00:33:47.020499 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:47.020510 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:47.020521 | orchestrator | changed: [testbed-node-4] 2026-02-28 
00:33:47.020531 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:47.020542 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:47.020552 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:47.020563 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:47.020575 | orchestrator | 2026-02-28 00:33:47.020587 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-28 00:33:47.020600 | orchestrator | Saturday 28 February 2026 00:33:24 +0000 (0:00:01.170) 0:06:52.500 ***** 2026-02-28 00:33:47.020613 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:47.020625 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:47.020638 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:47.020650 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:47.020662 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:47.020674 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:47.020686 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:47.020698 | orchestrator | 2026-02-28 00:33:47.020711 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-28 00:33:47.020723 | orchestrator | Saturday 28 February 2026 00:33:27 +0000 (0:00:02.485) 0:06:54.985 ***** 2026-02-28 00:33:47.020777 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:47.020792 | orchestrator | 2026-02-28 00:33:47.020805 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-28 00:33:47.020818 | orchestrator | Saturday 28 February 2026 00:33:27 +0000 (0:00:00.109) 0:06:55.095 ***** 2026-02-28 00:33:47.020830 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:47.020842 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:47.020854 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:47.020866 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:47.020883 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:47.020902 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:47.020917 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:47.020928 | orchestrator | 2026-02-28 00:33:47.020939 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-28 00:33:47.020951 | orchestrator | Saturday 28 February 2026 00:33:28 +0000 (0:00:01.056) 0:06:56.152 ***** 2026-02-28 00:33:47.020961 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:47.020972 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:47.020983 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:47.020993 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:47.021003 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:47.021014 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:47.021024 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:47.021035 | orchestrator | 2026-02-28 00:33:47.021046 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-28 00:33:47.021056 | orchestrator | Saturday 28 February 2026 00:33:28 +0000 (0:00:00.552) 0:06:56.705 ***** 2026-02-28 00:33:47.021068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:47.021081 | orchestrator | 2026-02-28 00:33:47.021092 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-28 00:33:47.021125 | orchestrator | Saturday 28 February 2026 00:33:30 +0000 (0:00:01.135) 0:06:57.840 ***** 2026-02-28 00:33:47.021136 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:47.021147 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:47.021158 | orchestrator | ok: 
[testbed-node-4] 2026-02-28 00:33:47.021169 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:47.021179 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:47.021190 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:47.021201 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:47.021212 | orchestrator | 2026-02-28 00:33:47.021223 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-28 00:33:47.021234 | orchestrator | Saturday 28 February 2026 00:33:30 +0000 (0:00:00.838) 0:06:58.678 ***** 2026-02-28 00:33:47.021245 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-28 00:33:47.021274 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-28 00:33:47.021287 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-28 00:33:47.021298 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-28 00:33:47.021308 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-28 00:33:47.021319 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-28 00:33:47.021329 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-28 00:33:47.021340 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-28 00:33:47.021351 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-28 00:33:47.021361 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-28 00:33:47.021372 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-28 00:33:47.021383 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-28 00:33:47.021402 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-28 00:33:47.021413 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-28 00:33:47.021423 | orchestrator | 2026-02-28 00:33:47.021434 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-28 00:33:47.021445 | orchestrator | Saturday 28 February 2026 00:33:33 +0000 (0:00:02.740) 0:07:01.419 ***** 2026-02-28 00:33:47.021456 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:47.021467 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:47.021478 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:47.021488 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:47.021499 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:47.021510 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:47.021520 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:47.021531 | orchestrator | 2026-02-28 00:33:47.021542 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-28 00:33:47.021553 | orchestrator | Saturday 28 February 2026 00:33:34 +0000 (0:00:00.533) 0:07:01.952 ***** 2026-02-28 00:33:47.021565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:47.021577 | orchestrator | 2026-02-28 00:33:47.021588 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-28 00:33:47.021599 | orchestrator | Saturday 28 February 2026 00:33:35 +0000 (0:00:00.898) 0:07:02.850 ***** 2026-02-28 00:33:47.021610 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:47.021620 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:47.021631 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:47.021642 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:47.021652 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:47.021663 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:47.021674 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:33:47.021684 | orchestrator | 2026-02-28 00:33:47.021695 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-28 00:33:47.021706 | orchestrator | Saturday 28 February 2026 00:33:35 +0000 (0:00:00.841) 0:07:03.692 ***** 2026-02-28 00:33:47.021722 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:47.021734 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:47.021744 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:47.021755 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:47.021765 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:47.021776 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:47.021786 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:47.021797 | orchestrator | 2026-02-28 00:33:47.021808 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-28 00:33:47.021819 | orchestrator | Saturday 28 February 2026 00:33:36 +0000 (0:00:01.100) 0:07:04.792 ***** 2026-02-28 00:33:47.021829 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:47.021840 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:47.021851 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:47.021861 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:47.021872 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:47.021882 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:47.021893 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:47.021904 | orchestrator | 2026-02-28 00:33:47.021915 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-28 00:33:47.021925 | orchestrator | Saturday 28 February 2026 00:33:37 +0000 (0:00:00.539) 0:07:05.332 ***** 2026-02-28 00:33:47.021936 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:47.021947 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:47.021958 | 
orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:47.021968 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:47.021979 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:47.021996 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:47.022006 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:47.022076 | orchestrator | 2026-02-28 00:33:47.022091 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-28 00:33:47.022121 | orchestrator | Saturday 28 February 2026 00:33:38 +0000 (0:00:01.439) 0:07:06.771 ***** 2026-02-28 00:33:47.022132 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:47.022143 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:47.022154 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:47.022165 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:47.022176 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:47.022186 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:47.022197 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:47.022208 | orchestrator | 2026-02-28 00:33:47.022219 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-28 00:33:47.022230 | orchestrator | Saturday 28 February 2026 00:33:39 +0000 (0:00:00.526) 0:07:07.298 ***** 2026-02-28 00:33:47.022241 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:47.022252 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:47.022262 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:47.022274 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:47.022284 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:47.022295 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:47.022314 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:20.440888 | orchestrator | 2026-02-28 00:34:20.441003 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-28 00:34:20.441020 | orchestrator | Saturday 28 February 2026 00:33:47 +0000 (0:00:07.543) 0:07:14.842 ***** 2026-02-28 00:34:20.441032 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:20.441045 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:20.441057 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:20.441068 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:20.441079 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:20.441090 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:20.441102 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:20.441153 | orchestrator | 2026-02-28 00:34:20.441174 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-28 00:34:20.441193 | orchestrator | Saturday 28 February 2026 00:33:48 +0000 (0:00:01.609) 0:07:16.452 ***** 2026-02-28 00:34:20.441212 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:20.441227 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:20.441239 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:20.441249 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:20.441260 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:20.441271 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:20.441282 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:20.441293 | orchestrator | 2026-02-28 00:34:20.441304 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-28 00:34:20.441316 | orchestrator | Saturday 28 February 2026 00:33:50 +0000 (0:00:01.741) 0:07:18.194 ***** 2026-02-28 00:34:20.441326 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:20.441337 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:20.441348 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:20.441359 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:20.441370 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:20.441381 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:20.441392 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:20.441402 | orchestrator | 2026-02-28 00:34:20.441413 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:34:20.441426 | orchestrator | Saturday 28 February 2026 00:33:52 +0000 (0:00:01.766) 0:07:19.960 ***** 2026-02-28 00:34:20.441438 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:20.441451 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:20.441463 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:20.441500 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:20.441571 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:20.441584 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:20.441596 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:20.441608 | orchestrator | 2026-02-28 00:34:20.441620 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:34:20.441632 | orchestrator | Saturday 28 February 2026 00:33:52 +0000 (0:00:00.852) 0:07:20.813 ***** 2026-02-28 00:34:20.441644 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:20.441657 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:20.441669 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:20.441682 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:20.441694 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:20.441706 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:20.441718 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:20.441730 | orchestrator | 2026-02-28 00:34:20.441742 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-28 00:34:20.441755 | orchestrator | Saturday 28 February 2026 00:33:53 +0000 (0:00:00.998) 0:07:21.811 ***** 
2026-02-28 00:34:20.441767 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:20.441779 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:20.441791 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:20.441802 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:20.441812 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:20.441823 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:20.441834 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:20.441844 | orchestrator |
2026-02-28 00:34:20.441855 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-28 00:34:20.441866 | orchestrator | Saturday 28 February 2026 00:33:54 +0000 (0:00:00.513) 0:07:22.324 *****
2026-02-28 00:34:20.441877 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.441907 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.441919 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.441930 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.441940 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.441951 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.441962 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.441972 | orchestrator |
2026-02-28 00:34:20.441984 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-28 00:34:20.441994 | orchestrator | Saturday 28 February 2026 00:33:55 +0000 (0:00:00.576) 0:07:22.900 *****
2026-02-28 00:34:20.442005 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.442076 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.442090 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.442102 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.442146 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.442157 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.442168 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.442179 | orchestrator |
2026-02-28 00:34:20.442189 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-28 00:34:20.442201 | orchestrator | Saturday 28 February 2026 00:33:55 +0000 (0:00:00.711) 0:07:23.612 *****
2026-02-28 00:34:20.442211 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.442222 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.442232 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.442243 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.442253 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.442264 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.442275 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.442285 | orchestrator |
2026-02-28 00:34:20.442296 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-28 00:34:20.442307 | orchestrator | Saturday 28 February 2026 00:33:56 +0000 (0:00:00.525) 0:07:24.138 *****
2026-02-28 00:34:20.442317 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.442328 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.442349 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.442359 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.442370 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.442380 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.442391 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.442401 | orchestrator |
2026-02-28 00:34:20.442431 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-28 00:34:20.442443 | orchestrator | Saturday 28 February 2026 00:34:01 +0000 (0:00:05.566) 0:07:29.704 *****
2026-02-28 00:34:20.442454 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:20.442464 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:20.442475 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:20.442486 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:20.442497 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:20.442507 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:20.442518 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:20.442529 | orchestrator |
2026-02-28 00:34:20.442540 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-28 00:34:20.442551 | orchestrator | Saturday 28 February 2026 00:34:02 +0000 (0:00:00.560) 0:07:30.265 *****
2026-02-28 00:34:20.442564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:20.442577 | orchestrator |
2026-02-28 00:34:20.442588 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-28 00:34:20.442612 | orchestrator | Saturday 28 February 2026 00:34:03 +0000 (0:00:01.078) 0:07:31.343 *****
2026-02-28 00:34:20.442623 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.442634 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.442645 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.442656 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.442666 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.442677 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.442687 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.442698 | orchestrator |
2026-02-28 00:34:20.442709 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-28 00:34:20.442720 | orchestrator | Saturday 28 February 2026 00:34:05 +0000 (0:00:01.946) 0:07:33.290 *****
2026-02-28 00:34:20.442730 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.442741 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.442752 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.442762 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.442773 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.442783 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.442794 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.442805 | orchestrator |
2026-02-28 00:34:20.442815 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-28 00:34:20.442826 | orchestrator | Saturday 28 February 2026 00:34:06 +0000 (0:00:01.134) 0:07:34.424 *****
2026-02-28 00:34:20.442837 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:20.442847 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:20.442858 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:20.442868 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:20.442899 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:20.442910 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:20.442921 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:20.442931 | orchestrator |
2026-02-28 00:34:20.442942 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-28 00:34:20.442953 | orchestrator | Saturday 28 February 2026 00:34:07 +0000 (0:00:00.861) 0:07:35.286 *****
2026-02-28 00:34:20.442971 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.442983 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.443001 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.443012 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.443023 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.443034 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.443044 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:20.443055 | orchestrator |
2026-02-28 00:34:20.443066 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-28 00:34:20.443077 | orchestrator | Saturday 28 February 2026 00:34:09 +0000 (0:00:01.977) 0:07:37.264 *****
2026-02-28 00:34:20.443088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:20.443099 | orchestrator |
2026-02-28 00:34:20.443126 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-28 00:34:20.443137 | orchestrator | Saturday 28 February 2026 00:34:10 +0000 (0:00:00.873) 0:07:38.138 *****
2026-02-28 00:34:20.443148 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:20.443159 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:20.443170 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:20.443181 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:20.443191 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:20.443202 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:20.443213 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:20.443223 | orchestrator |
2026-02-28 00:34:20.443241 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-28 00:34:51.749437 | orchestrator | Saturday 28 February 2026 00:34:20 +0000 (0:00:10.116) 0:07:48.254 *****
2026-02-28 00:34:51.749558 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:51.749584 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:51.749603 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:51.749621 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:51.749639 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:51.749658 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:51.749676 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:51.749692 | orchestrator |
2026-02-28 00:34:51.749704 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-28 00:34:51.749823 | orchestrator | Saturday 28 February 2026 00:34:22 +0000 (0:00:02.098) 0:07:50.352 *****
2026-02-28 00:34:51.749838 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:51.749850 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:51.749861 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:51.749872 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:51.749883 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:51.749894 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:51.749905 | orchestrator |
2026-02-28 00:34:51.749916 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-28 00:34:51.749928 | orchestrator | Saturday 28 February 2026 00:34:23 +0000 (0:00:01.276) 0:07:51.629 *****
2026-02-28 00:34:51.749939 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.749952 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.749965 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.749978 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.749990 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.750082 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.750096 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.750109 | orchestrator |
2026-02-28 00:34:51.750189 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-28 00:34:51.750203 | orchestrator |
2026-02-28 00:34:51.750215 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-28 00:34:51.750227 | orchestrator | Saturday 28 February 2026 00:34:25 +0000 (0:00:01.253) 0:07:52.883 *****
2026-02-28 00:34:51.750240 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:51.750252 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:51.750264 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:51.750277 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:51.750290 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:51.750302 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:51.750313 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:51.750343 | orchestrator |
2026-02-28 00:34:51.750378 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-28 00:34:51.750390 | orchestrator |
2026-02-28 00:34:51.750401 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-28 00:34:51.750413 | orchestrator | Saturday 28 February 2026 00:34:25 +0000 (0:00:00.743) 0:07:53.626 *****
2026-02-28 00:34:51.750424 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.750435 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.750458 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.750469 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.750480 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.750491 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.750502 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.750513 | orchestrator |
2026-02-28 00:34:51.750524 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-28 00:34:51.750559 | orchestrator | Saturday 28 February 2026 00:34:27 +0000 (0:00:01.392) 0:07:55.019 *****
2026-02-28 00:34:51.750571 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:51.750582 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:51.750593 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:51.750604 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:51.750615 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:51.750625 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:51.750636 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:51.750647 | orchestrator |
2026-02-28 00:34:51.750658 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-28 00:34:51.750669 | orchestrator | Saturday 28 February 2026 00:34:28 +0000 (0:00:01.477) 0:07:56.497 *****
2026-02-28 00:34:51.750680 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:51.750691 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:51.750702 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:51.750713 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:51.750724 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:51.750743 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:51.750762 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:51.750780 | orchestrator |
2026-02-28 00:34:51.750797 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-28 00:34:51.750815 | orchestrator | Saturday 28 February 2026 00:34:29 +0000 (0:00:00.531) 0:07:57.028 *****
2026-02-28 00:34:51.750834 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:51.750853 | orchestrator |
2026-02-28 00:34:51.750868 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-28 00:34:51.750887 | orchestrator | Saturday 28 February 2026 00:34:30 +0000 (0:00:01.025) 0:07:58.053 *****
2026-02-28 00:34:51.750908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:51.750960 | orchestrator |
2026-02-28 00:34:51.750980 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-28 00:34:51.750991 | orchestrator | Saturday 28 February 2026 00:34:31 +0000 (0:00:00.835) 0:07:58.889 *****
2026-02-28 00:34:51.751002 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751013 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751024 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.751034 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.751045 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751056 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751067 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.751078 | orchestrator |
2026-02-28 00:34:51.751111 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-28 00:34:51.751147 | orchestrator | Saturday 28 February 2026 00:34:39 +0000 (0:00:08.862) 0:08:07.752 *****
2026-02-28 00:34:51.751158 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751169 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751180 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751191 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751202 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.751212 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.751223 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.751234 | orchestrator |
2026-02-28 00:34:51.751245 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-28 00:34:51.751255 | orchestrator | Saturday 28 February 2026 00:34:40 +0000 (0:00:00.845) 0:08:08.597 *****
2026-02-28 00:34:51.751266 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751277 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751287 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751298 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751309 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.751319 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.751330 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.751340 | orchestrator |
2026-02-28 00:34:51.751351 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-28 00:34:51.751362 | orchestrator | Saturday 28 February 2026 00:34:42 +0000 (0:00:01.344) 0:08:09.941 *****
2026-02-28 00:34:51.751373 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751384 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751394 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751405 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751416 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.751426 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.751437 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.751448 | orchestrator |
2026-02-28 00:34:51.751458 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-28 00:34:51.751469 | orchestrator | Saturday 28 February 2026 00:34:44 +0000 (0:00:02.057) 0:08:11.999 *****
2026-02-28 00:34:51.751480 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751491 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751502 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751512 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751523 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.751534 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.751544 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.751555 | orchestrator |
2026-02-28 00:34:51.751566 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-28 00:34:51.751577 | orchestrator | Saturday 28 February 2026 00:34:45 +0000 (0:00:01.247) 0:08:13.246 *****
2026-02-28 00:34:51.751588 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751598 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751617 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751628 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751639 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.751649 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.751660 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.751670 | orchestrator |
2026-02-28 00:34:51.751681 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-28 00:34:51.751692 | orchestrator |
2026-02-28 00:34:51.751711 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-28 00:34:51.751722 | orchestrator | Saturday 28 February 2026 00:34:46 +0000 (0:00:01.126) 0:08:14.372 *****
2026-02-28 00:34:51.751733 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:51.751744 | orchestrator |
2026-02-28 00:34:51.751755 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-28 00:34:51.751766 | orchestrator | Saturday 28 February 2026 00:34:47 +0000 (0:00:00.892) 0:08:15.265 *****
2026-02-28 00:34:51.751776 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:51.751787 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:51.751798 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:51.751808 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:51.751819 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:51.751829 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:51.751840 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:51.751850 | orchestrator |
2026-02-28 00:34:51.751867 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-28 00:34:51.751894 | orchestrator | Saturday 28 February 2026 00:34:48 +0000 (0:00:01.111) 0:08:16.376 *****
2026-02-28 00:34:51.751917 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:51.751935 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:51.751953 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:51.751970 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:51.751987 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:51.752005 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:51.752025 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:51.752046 | orchestrator |
2026-02-28 00:34:51.752064 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-28 00:34:51.752084 | orchestrator | Saturday 28 February 2026 00:34:49 +0000 (0:00:01.161) 0:08:17.538 *****
2026-02-28 00:34:51.752096 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:51.752107 | orchestrator |
2026-02-28 00:34:51.752180 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-28 00:34:51.752194 | orchestrator | Saturday 28 February 2026 00:34:50 +0000 (0:00:01.167) 0:08:18.705 *****
2026-02-28 00:34:51.752205 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:51.752216 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:51.752226 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:51.752237 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:51.752248 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:51.752258 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:51.752269 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:51.752279 | orchestrator |
2026-02-28 00:34:51.752302 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-28 00:34:53.392725 | orchestrator | Saturday 28 February 2026 00:34:51 +0000 (0:00:00.860) 0:08:19.566 *****
2026-02-28 00:34:53.392825 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:53.392843 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:53.392855 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:53.392866 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:53.392877 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:53.392888 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:53.392899 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:53.392937 | orchestrator |
2026-02-28 00:34:53.392950 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:34:53.392964 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-28 00:34:53.392976 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:34:53.392987 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:34:53.392998 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:34:53.393009 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-28 00:34:53.393020 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-28 00:34:53.393031 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-28 00:34:53.393042 | orchestrator |
2026-02-28 00:34:53.393053 | orchestrator |
2026-02-28 00:34:53.393064 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:34:53.393075 | orchestrator | Saturday 28 February 2026 00:34:52 +0000 (0:00:01.118) 0:08:20.684 *****
2026-02-28 00:34:53.393086 | orchestrator | ===============================================================================
2026-02-28 00:34:53.393097 | orchestrator | osism.commons.packages : Install required packages --------------------- 87.06s
2026-02-28 00:34:53.393108 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.37s
2026-02-28 00:34:53.393183 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.67s
2026-02-28 00:34:53.393194 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.95s
2026-02-28 00:34:53.393205 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.10s
2026-02-28 00:34:53.393232 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.70s
2026-02-28 00:34:53.393243 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.78s
2026-02-28 00:34:53.393255 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.12s
2026-02-28 00:34:53.393266 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.41s
2026-02-28 00:34:53.393279 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.98s
2026-02-28 00:34:53.393291 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.86s
2026-02-28 00:34:53.393303 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.47s
2026-02-28 00:34:53.393316 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.10s
2026-02-28 00:34:53.393328 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.76s
2026-02-28 00:34:53.393340 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.60s
2026-02-28 00:34:53.393352 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.54s
2026-02-28 00:34:53.393364 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.27s
2026-02-28 00:34:53.393376 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.92s
2026-02-28 00:34:53.393388 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.79s
2026-02-28 00:34:53.393400 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.73s
2026-02-28 00:34:53.747895 | orchestrator | + osism apply fail2ban
2026-02-28 00:35:06.682580 | orchestrator | 2026-02-28 00:35:06 | INFO  | Task 741b6393-9a40-484b-8c80-59e7af96debd (fail2ban) was prepared for execution.
2026-02-28 00:35:06.682694 | orchestrator | 2026-02-28 00:35:06 | INFO  | It takes a moment until task 741b6393-9a40-484b-8c80-59e7af96debd (fail2ban) has been started and output is visible here.
2026-02-28 00:35:28.553193 | orchestrator |
2026-02-28 00:35:28.553302 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-28 00:35:28.553319 | orchestrator |
2026-02-28 00:35:28.553331 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-28 00:35:28.553343 | orchestrator | Saturday 28 February 2026 00:35:11 +0000 (0:00:00.288) 0:00:00.288 *****
2026-02-28 00:35:28.553357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:35:28.553370 | orchestrator |
2026-02-28 00:35:28.553381 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-28 00:35:28.553392 | orchestrator | Saturday 28 February 2026 00:35:12 +0000 (0:00:01.271) 0:00:01.559 *****
2026-02-28 00:35:28.553403 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:28.553415 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:28.553427 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:28.553438 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:28.553449 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:28.553459 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:28.553470 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:28.553482 | orchestrator |
2026-02-28 00:35:28.553493 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-28 00:35:28.553504 | orchestrator | Saturday 28 February 2026 00:35:23 +0000 (0:00:10.779) 0:00:12.339 *****
2026-02-28 00:35:28.553515 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:28.553527 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:28.553537 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:28.553548 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:28.553559 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:28.553570 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:28.553581 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:28.553592 | orchestrator |
2026-02-28 00:35:28.553603 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-28 00:35:28.553614 | orchestrator | Saturday 28 February 2026 00:35:25 +0000 (0:00:01.466) 0:00:13.806 *****
2026-02-28 00:35:28.553625 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:28.553637 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:28.553648 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:28.553659 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:28.553670 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:28.553680 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:28.553691 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:28.553702 | orchestrator |
2026-02-28 00:35:28.553715 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-28 00:35:28.553727 | orchestrator | Saturday 28 February 2026 00:35:26 +0000 (0:00:01.489) 0:00:15.295 *****
2026-02-28 00:35:28.553739 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:28.553752 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:28.553765 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:28.553777 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:28.553798 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:28.553819 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:28.553839 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:28.553859 | orchestrator |
2026-02-28 00:35:28.553880 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:35:28.553899 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.553955 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.553978 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.554000 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.554160 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.554189 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.554208 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:28.554227 | orchestrator |
2026-02-28 00:35:28.554246 | orchestrator |
2026-02-28 00:35:28.554265 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:35:28.554284 | orchestrator | Saturday 28 February 2026 00:35:28 +0000 (0:00:01.610) 0:00:16.906 *****
2026-02-28 00:35:28.554304 | orchestrator | ===============================================================================
2026-02-28 00:35:28.554323 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.78s
2026-02-28 00:35:28.554342 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s
2026-02-28 00:35:28.554360 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.49s
2026-02-28 00:35:28.554379 | orchestrator | osism.services.fail2ban :
Copy configuration files ---------------------- 1.47s 2026-02-28 00:35:28.554398 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.27s 2026-02-28 00:35:28.885505 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-28 00:35:28.885602 | orchestrator | + osism apply network 2026-02-28 00:35:40.997476 | orchestrator | 2026-02-28 00:35:40 | INFO  | Task 23e500c9-f2d0-4e1d-b0da-868f2a252359 (network) was prepared for execution. 2026-02-28 00:35:40.997588 | orchestrator | 2026-02-28 00:35:40 | INFO  | It takes a moment until task 23e500c9-f2d0-4e1d-b0da-868f2a252359 (network) has been started and output is visible here. 2026-02-28 00:36:09.739024 | orchestrator | 2026-02-28 00:36:09.739200 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-28 00:36:09.739229 | orchestrator | 2026-02-28 00:36:09.740163 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-28 00:36:09.740264 | orchestrator | Saturday 28 February 2026 00:35:45 +0000 (0:00:00.298) 0:00:00.298 ***** 2026-02-28 00:36:09.740285 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:09.740304 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:09.740322 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:09.740339 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:09.740356 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:09.740373 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:09.740390 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:09.740407 | orchestrator | 2026-02-28 00:36:09.740425 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-28 00:36:09.740442 | orchestrator | Saturday 28 February 2026 00:35:46 +0000 (0:00:00.744) 0:00:01.042 ***** 2026-02-28 00:36:09.740462 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:36:09.740484 | orchestrator | 2026-02-28 00:36:09.740501 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-28 00:36:09.740552 | orchestrator | Saturday 28 February 2026 00:35:47 +0000 (0:00:01.309) 0:00:02.352 ***** 2026-02-28 00:36:09.740571 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:09.740589 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:09.740606 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:09.740624 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:09.740643 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:09.740662 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:09.740678 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:09.740693 | orchestrator | 2026-02-28 00:36:09.740708 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-28 00:36:09.740723 | orchestrator | Saturday 28 February 2026 00:35:49 +0000 (0:00:02.045) 0:00:04.397 ***** 2026-02-28 00:36:09.740738 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:09.740753 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:09.740768 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:09.740784 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:09.740798 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:09.740813 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:09.740827 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:09.740842 | orchestrator | 2026-02-28 00:36:09.740856 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-28 00:36:09.740873 | orchestrator | Saturday 28 February 2026 00:35:51 +0000 (0:00:01.861) 0:00:06.258 ***** 
2026-02-28 00:36:09.740889 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-28 00:36:09.740905 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-28 00:36:09.740920 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-28 00:36:09.740936 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-28 00:36:09.740952 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-28 00:36:09.740967 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-28 00:36:09.740983 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-28 00:36:09.740998 | orchestrator |
2026-02-28 00:36:09.741032 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-28 00:36:09.741052 | orchestrator | Saturday 28 February 2026 00:35:52 +0000 (0:00:00.974) 0:00:07.233 *****
2026-02-28 00:36:09.741067 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-28 00:36:09.741083 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-28 00:36:09.741100 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-28 00:36:09.741115 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:36:09.741131 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-28 00:36:09.741166 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:36:09.741181 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-28 00:36:09.741194 | orchestrator |
2026-02-28 00:36:09.741207 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-28 00:36:09.741222 | orchestrator | Saturday 28 February 2026 00:35:55 +0000 (0:00:03.359) 0:00:10.592 *****
2026-02-28 00:36:09.741236 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:09.741249 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:36:09.741263 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:36:09.741276 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:36:09.741289 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:36:09.741303 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:36:09.741315 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:36:09.741327 | orchestrator |
2026-02-28 00:36:09.741340 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-28 00:36:09.741354 | orchestrator | Saturday 28 February 2026 00:35:57 +0000 (0:00:01.715) 0:00:12.307 *****
2026-02-28 00:36:09.741368 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-28 00:36:09.741381 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:36:09.741395 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:36:09.741409 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-28 00:36:09.741433 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-28 00:36:09.741447 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-28 00:36:09.741462 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-28 00:36:09.741475 | orchestrator |
2026-02-28 00:36:09.741490 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-28 00:36:09.741504 | orchestrator | Saturday 28 February 2026 00:35:59 +0000 (0:00:01.806) 0:00:14.113 *****
2026-02-28 00:36:09.741518 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:09.741532 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:09.741546 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:09.741559 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:09.741574 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:09.741588 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:09.741601 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:09.741615 | orchestrator |
2026-02-28 00:36:09.741629 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-28 00:36:09.741670 | orchestrator | Saturday 28 February 2026 00:36:00 +0000 (0:00:01.109) 0:00:15.223 *****
2026-02-28 00:36:09.741684 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:09.741697 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:09.741711 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:09.741725 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:09.741739 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:09.741753 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:09.741767 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:09.741781 | orchestrator |
2026-02-28 00:36:09.741794 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-28 00:36:09.741807 | orchestrator | Saturday 28 February 2026 00:36:01 +0000 (0:00:00.678) 0:00:15.902 *****
2026-02-28 00:36:09.741821 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:09.741835 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:09.741848 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:09.741862 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:09.741876 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:09.741889 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:09.741903 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:09.741916 | orchestrator |
2026-02-28 00:36:09.741930 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-28 00:36:09.741945 | orchestrator | Saturday 28 February 2026 00:36:03 +0000 (0:00:01.942) 0:00:17.844 *****
2026-02-28 00:36:09.741959 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:09.741972 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:09.741986 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:09.742000 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:09.742013 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:09.742089 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:09.742105 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-28 00:36:09.742122 | orchestrator |
2026-02-28 00:36:09.742181 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-28 00:36:09.742196 | orchestrator | Saturday 28 February 2026 00:36:03 +0000 (0:00:00.891) 0:00:18.736 *****
2026-02-28 00:36:09.742210 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:09.742225 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:36:09.742238 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:36:09.742252 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:36:09.742266 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:36:09.742280 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:36:09.742294 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:36:09.742307 | orchestrator |
2026-02-28 00:36:09.742321 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-28 00:36:09.742335 | orchestrator | Saturday 28 February 2026 00:36:05 +0000 (0:00:01.558) 0:00:20.294 *****
2026-02-28 00:36:09.742350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:36:09.742376 | orchestrator |
2026-02-28 00:36:09.742390 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-28 00:36:09.742403 | orchestrator | Saturday 28 February 2026 00:36:06 +0000 (0:00:01.273) 0:00:21.567 *****
2026-02-28 00:36:09.742417 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:09.742431 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:09.742445 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:09.742460 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:09.742479 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:09.742494 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:09.742507 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:09.742521 | orchestrator |
2026-02-28 00:36:09.742535 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-28 00:36:09.742549 | orchestrator | Saturday 28 February 2026 00:36:07 +0000 (0:00:00.951) 0:00:22.518 *****
2026-02-28 00:36:09.742562 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:09.742576 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:09.742590 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:09.742603 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:09.742617 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:09.742632 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:09.742646 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:09.742691 | orchestrator |
2026-02-28 00:36:09.742705 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-28 00:36:09.742719 | orchestrator | Saturday 28 February 2026 00:36:08 +0000 (0:00:00.840) 0:00:23.359 *****
2026-02-28 00:36:09.742733 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742749 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742762 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742776 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742790 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742803 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742817 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742831 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742844 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742858 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742871 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742885 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-28 00:36:09.742899 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742912 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-28 00:36:09.742926 | orchestrator |
2026-02-28 00:36:09.742950 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-28 00:36:27.741498 | orchestrator | Saturday 28 February 2026 00:36:09 +0000 (0:00:01.189) 0:00:24.549 *****
2026-02-28 00:36:27.741611 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:27.741629 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:27.741641 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:27.741652 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:27.741663 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:27.741674 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:27.741685 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:27.741696 | orchestrator |
2026-02-28 00:36:27.741734 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-28 00:36:27.741746 | orchestrator | Saturday 28 February 2026 00:36:10 +0000 (0:00:00.625) 0:00:25.174 *****
2026-02-28 00:36:27.741759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-4, testbed-node-1, testbed-manager, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-5
2026-02-28 00:36:27.741773 | orchestrator |
2026-02-28 00:36:27.741784 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-28 00:36:27.741796 | orchestrator | Saturday 28 February 2026 00:36:15 +0000 (0:00:04.838) 0:00:30.013 *****
2026-02-28 00:36:27.741808 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741847 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.741858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.741919 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.741937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.741948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.741979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742087 | orchestrator |
2026-02-28 00:36:27.742102 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-28 00:36:27.742114 | orchestrator | Saturday 28 February 2026 00:36:21 +0000 (0:00:06.095) 0:00:36.108 *****
2026-02-28 00:36:27.742127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742167 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-28 00:36:27.742252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742264 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:27.742332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:35.239332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-28 00:36:35.239445 | orchestrator |
2026-02-28 00:36:35.239462 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-28 00:36:35.239475 | orchestrator | Saturday 28 February 2026 00:36:27 +0000 (0:00:06.438) 0:00:42.547 *****
2026-02-28 00:36:35.239488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:36:35.239500 | orchestrator |
2026-02-28 00:36:35.239512 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-28 00:36:35.239524 | orchestrator | Saturday 28 February 2026 00:36:29 +0000 (0:00:01.609) 0:00:44.156 *****
2026-02-28 00:36:35.239535 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:35.239547 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:35.239558 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:35.239568 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:35.239579 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:35.239590 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:35.239600 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:35.239611 | orchestrator |
2026-02-28 00:36:35.239622 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-28 00:36:35.239634 | orchestrator | Saturday 28 February 2026 00:36:30 +0000 (0:00:01.415) 0:00:45.572 *****
2026-02-28 00:36:35.239645 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.239656 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.239667 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.239679 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.239690 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:35.239701 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.239712 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.239723 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.239734 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.239745 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:35.239757 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.239784 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.239796 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.239807 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.239839 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:35.239850 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.239861 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.239875 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.239887 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.239900 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:35.239912 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.239925 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.239937 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.239949 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.239961 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:35.239973 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.239985 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.239997 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.240010 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.240020 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:35.240031 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-28 00:36:35.240042 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-28 00:36:35.240052 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-28 00:36:35.240063 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-28 00:36:35.240074 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:35.240085 | orchestrator |
2026-02-28 00:36:35.240096 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-28 00:36:35.240124 | orchestrator | Saturday 28 February 2026 00:36:33 +0000 (0:00:02.500) 0:00:48.073 *****
2026-02-28 00:36:35.240136 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:35.240196 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:35.240207 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:35.240218 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:35.240229 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:35.240239 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:35.240250 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:35.240260 | orchestrator |
2026-02-28 00:36:35.240271 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-28 00:36:35.240282 | orchestrator | Saturday 28 February 2026 00:36:33 +0000 (0:00:00.751) 0:00:48.825 *****
2026-02-28 00:36:35.240293 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:35.240304 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:35.240314 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:35.240325 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:35.240336 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:35.240346 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:35.240357 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:35.240368 | orchestrator |
2026-02-28 00:36:35.240378 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:36:35.240391 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-28 00:36:35.240403 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 00:36:35.240422 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 00:36:35.240433 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 00:36:35.240444 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 00:36:35.240455 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 00:36:35.240465 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-28 00:36:35.240476 | orchestrator |
2026-02-28 00:36:35.240487 | orchestrator |
2026-02-28 00:36:35.240498 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:36:35.240509 | orchestrator | Saturday 28 February 2026 00:36:34 +0000 (0:00:00.796) 0:00:49.621 *****
2026-02-28 00:36:35.240526 | orchestrator | ===============================================================================
2026-02-28 00:36:35.240537 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.44s
2026-02-28 00:36:35.240548 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.10s
2026-02-28 00:36:35.240558 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.84s
2026-02-28 00:36:35.240569 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.36s
2026-02-28 00:36:35.240580 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.50s
2026-02-28 00:36:35.240590 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.05s
2026-02-28 00:36:35.240601 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.94s
2026-02-28 00:36:35.240612 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s
2026-02-28 00:36:35.240622 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s
2026-02-28 00:36:35.240647 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s
2026-02-28 00:36:35.240658 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.61s
2026-02-28 00:36:35.240680 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.56s
2026-02-28 00:36:35.240691 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.42s
2026-02-28 00:36:35.240702 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.31s
2026-02-28 00:36:35.240713 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.27s
2026-02-28 00:36:35.240723 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s
2026-02-28 00:36:35.240734 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s
2026-02-28 00:36:35.240745 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s
2026-02-28 00:36:35.240756 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s
2026-02-28 00:36:35.240766 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s
2026-02-28 00:36:35.588451 | orchestrator | + osism apply wireguard
2026-02-28 00:36:47.844520 | orchestrator | 2026-02-28 00:36:47 | INFO  | Task 4023ff21-89a0-4eea-9361-97255b8c196e (wireguard) was prepared for execution.
2026-02-28 00:36:47.844620 | orchestrator | 2026-02-28 00:36:47 | INFO  | It takes a moment until task 4023ff21-89a0-4eea-9361-97255b8c196e (wireguard) has been started and output is visible here.
2026-02-28 00:37:08.234334 | orchestrator |
2026-02-28 00:37:08.234455 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-28 00:37:08.234495 | orchestrator |
2026-02-28 00:37:08.234506 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-28 00:37:08.272056 | orchestrator | Saturday 28 February 2026 00:36:52 +0000 (0:00:00.228) 0:00:00.228 *****
2026-02-28 00:37:08.272145 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:08.272213 | orchestrator |
2026-02-28 00:37:08.272227 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-28 00:37:08.272239 | orchestrator | Saturday 28 February 2026 00:36:53 +0000 (0:00:01.546) 0:00:01.775 *****
2026-02-28 00:37:08.272250 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272267 | orchestrator |
2026-02-28 00:37:08.272279 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-28 00:37:08.272290 | orchestrator | Saturday 28 February 2026 00:37:00 +0000 (0:00:06.638) 0:00:08.414 *****
2026-02-28 00:37:08.272301 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272312 | orchestrator |
2026-02-28 00:37:08.272323 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-28 00:37:08.272335 | orchestrator | Saturday 28 February 2026 00:37:00 +0000 (0:00:00.558) 0:00:08.972 *****
2026-02-28 00:37:08.272346 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272356 | orchestrator |
2026-02-28 00:37:08.272367 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-28 00:37:08.272379 | orchestrator | Saturday 28 February 2026 00:37:01 +0000 (0:00:00.470) 0:00:09.443 *****
2026-02-28 00:37:08.272390 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:08.272400 | orchestrator |
2026-02-28 00:37:08.272411 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-28 00:37:08.272422 | orchestrator | Saturday 28 February 2026 00:37:02 +0000 (0:00:00.700) 0:00:10.143 *****
2026-02-28 00:37:08.272433 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:08.272444 | orchestrator |
2026-02-28 00:37:08.272455 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-28 00:37:08.272466 | orchestrator | Saturday 28 February 2026 00:37:02 +0000 (0:00:00.439) 0:00:10.583 *****
2026-02-28 00:37:08.272476 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:08.272487 | orchestrator |
2026-02-28 00:37:08.272498 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-28 00:37:08.272509 | orchestrator | Saturday 28 February 2026 00:37:02 +0000 (0:00:00.439) 0:00:11.022 *****
2026-02-28 00:37:08.272520 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272531 | orchestrator |
2026-02-28 00:37:08.272542 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-28 00:37:08.272553 | orchestrator | Saturday 28 February 2026 00:37:04 +0000 (0:00:01.251) 0:00:12.274 *****
2026-02-28 00:37:08.272564 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-28 00:37:08.272575 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272585 | orchestrator |
2026-02-28 00:37:08.272596 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-28 00:37:08.272607 | orchestrator | Saturday 28 February 2026 00:37:05 +0000 (0:00:00.940) 0:00:13.214 *****
2026-02-28 00:37:08.272618 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272630 | orchestrator |
2026-02-28 00:37:08.272641 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-28 00:37:08.272652 | orchestrator | Saturday 28 February 2026 00:37:06 +0000 (0:00:01.732) 0:00:14.947 *****
2026-02-28 00:37:08.272663 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:08.272674 | orchestrator |
2026-02-28 00:37:08.272685 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:37:08.272697 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:37:08.272709 | orchestrator |
2026-02-28 00:37:08.272720 | orchestrator |
2026-02-28 00:37:08.272731 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:37:08.272769 | orchestrator | Saturday 28 February 2026 00:37:07 +0000 (0:00:00.994) 0:00:15.942 *****
2026-02-28 00:37:08.272781 | orchestrator | ===============================================================================
2026-02-28 00:37:08.272791 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.64s
2026-02-28 00:37:08.272802 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s
2026-02-28 00:37:08.272813 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s
2026-02-28 00:37:08.272824 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s
2026-02-28 00:37:08.272834 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s
2026-02-28 00:37:08.272845 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2026-02-28 00:37:08.272856 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-02-28 00:37:08.272866 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2026-02-28 00:37:08.272877 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2026-02-28 00:37:08.272888 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-02-28 00:37:08.272899 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2026-02-28 00:37:08.526999 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-28 00:37:08.562689 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-28 00:37:08.562778 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-28 00:37:08.636112 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 204 0 --:--:-- --:--:-- --:--:-- 205
2026-02-28 00:37:08.648770 | orchestrator | + osism apply --environment custom workarounds
2026-02-28 00:37:10.552588 | orchestrator | 2026-02-28 00:37:10 | INFO  | Trying to run play workarounds in environment custom
2026-02-28 00:37:20.640244 | orchestrator | 2026-02-28 00:37:20 | INFO  | Task d3d724d7-e425-4927-8db4-3e671455d36a (workarounds) was prepared for execution.
2026-02-28 00:37:20.640356 | orchestrator | 2026-02-28 00:37:20 | INFO  | It takes a moment until task d3d724d7-e425-4927-8db4-3e671455d36a (workarounds) has been started and output is visible here.
2026-02-28 00:37:45.793333 | orchestrator |
2026-02-28 00:37:45.793457 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:37:45.793483 | orchestrator |
2026-02-28 00:37:45.793502 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-28 00:37:45.793521 | orchestrator | Saturday 28 February 2026 00:37:24 +0000 (0:00:00.114) 0:00:00.114 *****
2026-02-28 00:37:45.793540 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793559 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793577 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793596 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793615 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793634 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793646 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-28 00:37:45.793657 | orchestrator |
2026-02-28 00:37:45.793668 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-28 00:37:45.793679 | orchestrator |
2026-02-28 00:37:45.793690 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-28 00:37:45.793701 | orchestrator | Saturday 28 February 2026 00:37:25 +0000 (0:00:00.724) 0:00:00.839 *****
2026-02-28 00:37:45.793712 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:45.793750 | orchestrator |
2026-02-28 00:37:45.793762 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-28 00:37:45.793772 | orchestrator |
2026-02-28 00:37:45.793784 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-28 00:37:45.793795 | orchestrator | Saturday 28 February 2026 00:37:27 +0000 (0:00:02.535) 0:00:03.374 *****
2026-02-28 00:37:45.793806 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:37:45.793816 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:37:45.793827 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:37:45.793838 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:37:45.793850 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:37:45.793863 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:37:45.793874 | orchestrator |
2026-02-28 00:37:45.793887 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-28 00:37:45.793899 | orchestrator |
2026-02-28 00:37:45.793911 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-28 00:37:45.793940 | orchestrator | Saturday 28 February 2026 00:37:29 +0000 (0:00:01.863) 0:00:05.237 *****
2026-02-28 00:37:45.793952 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:45.793964 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:45.793974 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:45.793985 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:45.793996 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:45.794007 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:45.794065 | orchestrator |
2026-02-28 00:37:45.794079 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-28 00:37:45.794092 | orchestrator | Saturday 28 February 2026 00:37:31 +0000 (0:00:01.516) 0:00:06.754 *****
2026-02-28 00:37:45.794112 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:45.794130 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:45.794146 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:45.794188 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:45.794204 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:45.794222 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:45.794240 | orchestrator |
2026-02-28 00:37:45.794257 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-28 00:37:45.794273 | orchestrator | Saturday 28 February 2026 00:37:35 +0000 (0:00:03.799) 0:00:10.554 *****
2026-02-28 00:37:45.794290 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:37:45.794309 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:37:45.794325 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:37:45.794343 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:37:45.794361 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:37:45.794379 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:37:45.794395 | orchestrator |
2026-02-28 00:37:45.794413 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-28 00:37:45.794432 | orchestrator |
2026-02-28 00:37:45.794451 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-28 00:37:45.794470 | orchestrator | Saturday 28 February 2026 00:37:35 +0000 (0:00:00.779) 0:00:11.333 *****
2026-02-28 00:37:45.794482 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:45.794493 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:45.794504 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:45.794514 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:45.794525 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:45.794536 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:45.794560 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:45.794571 | orchestrator |
2026-02-28 00:37:45.794582 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-28 00:37:45.794593 | orchestrator | Saturday 28 February 2026 00:37:37 +0000 (0:00:01.581) 0:00:12.915 *****
2026-02-28 00:37:45.794603 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:45.794614 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:45.794625 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:45.794635 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:45.794646 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:45.794657 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:45.794689 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:45.794701 | orchestrator |
2026-02-28 00:37:45.794711 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-28 00:37:45.794722 | orchestrator | Saturday 28 February 2026 00:37:38 +0000 (0:00:01.552) 0:00:14.468 *****
2026-02-28 00:37:45.794733 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:37:45.794743 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:37:45.794754 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:37:45.794764 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:37:45.794775 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:37:45.794786 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:37:45.794796 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:45.794807 | orchestrator |
2026-02-28 00:37:45.794818 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-28 00:37:45.794828 | orchestrator | Saturday 28 February 2026 00:37:40 +0000 (0:00:01.608) 0:00:16.076 *****
2026-02-28 00:37:45.794839 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:45.794850 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:45.794860 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:45.794871 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:45.794881 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:45.794891 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:45.794902 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:45.794912 | orchestrator |
2026-02-28 00:37:45.794923 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-28 00:37:45.794934 | orchestrator | Saturday 28 February 2026 00:37:42 +0000 (0:00:01.886) 0:00:17.963 *****
2026-02-28 00:37:45.794945 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:37:45.794955 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:37:45.794966 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:37:45.794976 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:37:45.794987 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:37:45.794997 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:37:45.795008 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:37:45.795018 | orchestrator |
2026-02-28 00:37:45.795029 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-28 00:37:45.795040 | orchestrator |
2026-02-28 00:37:45.795050 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-28 00:37:45.795061 | orchestrator | Saturday 28 February 2026 00:37:43 +0000 (0:00:00.652) 0:00:18.615 *****
2026-02-28 00:37:45.795072 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:37:45.795083 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:37:45.795093 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:37:45.795104 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:37:45.795114 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:37:45.795133 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:37:45.795144 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:45.795211 | orchestrator |
2026-02-28 00:37:45.795223 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:37:45.795236 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-28 00:37:45.795248 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:45.795268 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:45.795279 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:45.795289 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:45.795300 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:45.795311 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:45.795322 | orchestrator |
2026-02-28 00:37:45.795332 | orchestrator |
2026-02-28 00:37:45.795343 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:37:45.795354 | orchestrator | Saturday 28 February 2026 00:37:45 +0000 (0:00:02.663) 0:00:21.278 *****
2026-02-28 00:37:45.795365 | orchestrator | ===============================================================================
2026-02-28 00:37:45.795375 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s
2026-02-28 00:37:45.795386 | orchestrator | Install python3-docker -------------------------------------------------- 2.66s
2026-02-28 00:37:45.795397 | orchestrator | Apply netplan configuration --------------------------------------------- 2.54s
2026-02-28 00:37:45.795408 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s
2026-02-28 00:37:45.795419 | orchestrator | Apply netplan configuration --------------------------------------------- 1.86s
2026-02-28 00:37:45.795430 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s
2026-02-28 00:37:45.795440 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.58s
2026-02-28 00:37:45.795451 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.55s
2026-02-28 00:37:45.795462 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s
2026-02-28 00:37:45.795472 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s
2026-02-28 00:37:45.795483 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.72s
2026-02-28 00:37:45.795501 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2026-02-28 00:37:46.428316 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-28 00:37:58.483531 | orchestrator | 2026-02-28 00:37:58 | INFO  | Task c06f4efd-96d9-4a8d-9276-d253aa0c7d64 (reboot) was prepared for execution.
2026-02-28 00:37:58.483626 | orchestrator | 2026-02-28 00:37:58 | INFO  | It takes a moment until task c06f4efd-96d9-4a8d-9276-d253aa0c7d64 (reboot) has been started and output is visible here. 2026-02-28 00:38:09.370494 | orchestrator | 2026-02-28 00:38:09.370625 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:09.370651 | orchestrator | 2026-02-28 00:38:09.370668 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:09.370686 | orchestrator | Saturday 28 February 2026 00:38:02 +0000 (0:00:00.222) 0:00:00.222 ***** 2026-02-28 00:38:09.370702 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:09.370718 | orchestrator | 2026-02-28 00:38:09.370734 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:09.370750 | orchestrator | Saturday 28 February 2026 00:38:03 +0000 (0:00:00.113) 0:00:00.336 ***** 2026-02-28 00:38:09.370767 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:09.370785 | orchestrator | 2026-02-28 00:38:09.370801 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:09.370847 | orchestrator | Saturday 28 February 2026 00:38:04 +0000 (0:00:01.042) 0:00:01.378 ***** 2026-02-28 00:38:09.370865 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:09.370882 | orchestrator | 2026-02-28 00:38:09.370899 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:09.370915 | orchestrator | 2026-02-28 00:38:09.370933 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:09.370950 | orchestrator | Saturday 28 February 2026 00:38:04 +0000 (0:00:00.137) 0:00:01.516 ***** 2026-02-28 00:38:09.370966 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:09.370983 | 
orchestrator | 2026-02-28 00:38:09.370999 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:09.371016 | orchestrator | Saturday 28 February 2026 00:38:04 +0000 (0:00:00.118) 0:00:01.635 ***** 2026-02-28 00:38:09.371034 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:09.371052 | orchestrator | 2026-02-28 00:38:09.371069 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:09.371103 | orchestrator | Saturday 28 February 2026 00:38:05 +0000 (0:00:00.731) 0:00:02.366 ***** 2026-02-28 00:38:09.371121 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:09.371139 | orchestrator | 2026-02-28 00:38:09.371224 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:09.371243 | orchestrator | 2026-02-28 00:38:09.371258 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:09.371274 | orchestrator | Saturday 28 February 2026 00:38:05 +0000 (0:00:00.117) 0:00:02.484 ***** 2026-02-28 00:38:09.371291 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:09.371307 | orchestrator | 2026-02-28 00:38:09.371323 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:09.371339 | orchestrator | Saturday 28 February 2026 00:38:05 +0000 (0:00:00.235) 0:00:02.720 ***** 2026-02-28 00:38:09.371356 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:09.371373 | orchestrator | 2026-02-28 00:38:09.371387 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:09.371397 | orchestrator | Saturday 28 February 2026 00:38:06 +0000 (0:00:00.708) 0:00:03.428 ***** 2026-02-28 00:38:09.371407 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:09.371417 | orchestrator | 2026-02-28 00:38:09.371426 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:09.371436 | orchestrator | 2026-02-28 00:38:09.371446 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:09.371455 | orchestrator | Saturday 28 February 2026 00:38:06 +0000 (0:00:00.150) 0:00:03.579 ***** 2026-02-28 00:38:09.371465 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:09.371474 | orchestrator | 2026-02-28 00:38:09.371484 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:09.371494 | orchestrator | Saturday 28 February 2026 00:38:06 +0000 (0:00:00.102) 0:00:03.681 ***** 2026-02-28 00:38:09.371503 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:09.371513 | orchestrator | 2026-02-28 00:38:09.371523 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:09.371532 | orchestrator | Saturday 28 February 2026 00:38:07 +0000 (0:00:00.696) 0:00:04.378 ***** 2026-02-28 00:38:09.371542 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:09.371551 | orchestrator | 2026-02-28 00:38:09.371561 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:09.371570 | orchestrator | 2026-02-28 00:38:09.371580 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:09.371589 | orchestrator | Saturday 28 February 2026 00:38:07 +0000 (0:00:00.132) 0:00:04.511 ***** 2026-02-28 00:38:09.371599 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:09.371609 | orchestrator | 2026-02-28 00:38:09.371618 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:09.371640 | orchestrator | Saturday 28 February 2026 00:38:07 +0000 (0:00:00.124) 0:00:04.635 ***** 2026-02-28 
00:38:09.371649 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:09.371659 | orchestrator | 2026-02-28 00:38:09.371669 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:09.371678 | orchestrator | Saturday 28 February 2026 00:38:07 +0000 (0:00:00.653) 0:00:05.289 ***** 2026-02-28 00:38:09.371688 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:09.371698 | orchestrator | 2026-02-28 00:38:09.371708 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:38:09.371717 | orchestrator | 2026-02-28 00:38:09.371727 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:38:09.371737 | orchestrator | Saturday 28 February 2026 00:38:08 +0000 (0:00:00.127) 0:00:05.416 ***** 2026-02-28 00:38:09.371747 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:09.371756 | orchestrator | 2026-02-28 00:38:09.371766 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:38:09.371776 | orchestrator | Saturday 28 February 2026 00:38:08 +0000 (0:00:00.107) 0:00:05.524 ***** 2026-02-28 00:38:09.371785 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:09.371795 | orchestrator | 2026-02-28 00:38:09.371804 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:38:09.371814 | orchestrator | Saturday 28 February 2026 00:38:08 +0000 (0:00:00.710) 0:00:06.235 ***** 2026-02-28 00:38:09.371845 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:09.371856 | orchestrator | 2026-02-28 00:38:09.371866 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:38:09.371876 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:09.371887 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:09.371897 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:09.371906 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:09.371916 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:09.371926 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:38:09.371935 | orchestrator | 2026-02-28 00:38:09.371945 | orchestrator | 2026-02-28 00:38:09.371954 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:38:09.371964 | orchestrator | Saturday 28 February 2026 00:38:08 +0000 (0:00:00.029) 0:00:06.265 ***** 2026-02-28 00:38:09.371982 | orchestrator | =============================================================================== 2026-02-28 00:38:09.371992 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.54s 2026-02-28 00:38:09.372002 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s 2026-02-28 00:38:09.372011 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.70s 2026-02-28 00:38:09.714567 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-28 00:38:21.763883 | orchestrator | 2026-02-28 00:38:21 | INFO  | Task a7155fee-69c5-4b27-80fc-865d0cf9010f (wait-for-connection) was prepared for execution. 2026-02-28 00:38:21.763988 | orchestrator | 2026-02-28 00:38:21 | INFO  | It takes a moment until task a7155fee-69c5-4b27-80fc-865d0cf9010f (wait-for-connection) has been started and output is visible here. 
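The play above reboots the nodes without waiting ("do not wait for the reboot to complete"), and the follow-up `osism apply wait-for-connection` then polls until each host answers again. A minimal generic retry helper in the same spirit (a hypothetical sketch, not the OSISM playbook itself):

```shell
#!/usr/bin/env bash
# wait_for_command: retry a command until it succeeds or MAX_ATTEMPTS is
# reached. Hypothetical helper mirroring the poll-until-reachable pattern.
wait_for_command() {
    local max_attempts=$1; shift
    local attempt=1
    until "$@"; do
        if (( attempt++ == max_attempts )); then
            echo "gave up after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 1
    done
}

# Example (hypothetical host/port): wait until SSH answers on a rebooted node.
# wait_for_command 60 nc -z testbed-node-0 22
```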
2026-02-28 00:38:37.845712 | orchestrator | 2026-02-28 00:38:37.845811 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-28 00:38:37.845825 | orchestrator | 2026-02-28 00:38:37.845834 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-28 00:38:37.845842 | orchestrator | Saturday 28 February 2026 00:38:25 +0000 (0:00:00.229) 0:00:00.230 ***** 2026-02-28 00:38:37.845849 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:37.845856 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:37.845863 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:37.845869 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:37.845876 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:37.845883 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:37.845891 | orchestrator | 2026-02-28 00:38:37.845899 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:38:37.845907 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:37.845917 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:37.845924 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:37.845931 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:37.845938 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:37.845944 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:37.845951 | orchestrator | 2026-02-28 00:38:37.845957 | orchestrator | 2026-02-28 00:38:37.845964 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 00:38:37.845970 | orchestrator | Saturday 28 February 2026 00:38:37 +0000 (0:00:11.525) 0:00:11.755 ***** 2026-02-28 00:38:37.845977 | orchestrator | =============================================================================== 2026-02-28 00:38:37.845984 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.53s 2026-02-28 00:38:38.190915 | orchestrator | + osism apply hddtemp 2026-02-28 00:38:50.211527 | orchestrator | 2026-02-28 00:38:50 | INFO  | Task be0d07fb-dda4-477d-ac23-5c8c8b06ba2b (hddtemp) was prepared for execution. 2026-02-28 00:38:50.211636 | orchestrator | 2026-02-28 00:38:50 | INFO  | It takes a moment until task be0d07fb-dda4-477d-ac23-5c8c8b06ba2b (hddtemp) has been started and output is visible here. 2026-02-28 00:39:19.041819 | orchestrator | 2026-02-28 00:39:19.041937 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-28 00:39:19.041956 | orchestrator | 2026-02-28 00:39:19.041970 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-28 00:39:19.041985 | orchestrator | Saturday 28 February 2026 00:38:55 +0000 (0:00:00.272) 0:00:00.272 ***** 2026-02-28 00:39:19.041998 | orchestrator | ok: [testbed-manager] 2026-02-28 00:39:19.042010 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:19.042088 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:19.042102 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:19.042141 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:19.042155 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:19.042200 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:19.042214 | orchestrator | 2026-02-28 00:39:19.042226 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-28 00:39:19.042239 | orchestrator | Saturday 28 February 2026 
00:38:56 +0000 (0:00:00.855) 0:00:01.127 ***** 2026-02-28 00:39:19.042256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:39:19.042300 | orchestrator | 2026-02-28 00:39:19.042313 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-28 00:39:19.042326 | orchestrator | Saturday 28 February 2026 00:38:57 +0000 (0:00:01.277) 0:00:02.405 ***** 2026-02-28 00:39:19.042340 | orchestrator | ok: [testbed-manager] 2026-02-28 00:39:19.042353 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:19.042366 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:19.042378 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:19.042393 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:19.042407 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:19.042420 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:19.042433 | orchestrator | 2026-02-28 00:39:19.042446 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-28 00:39:19.042475 | orchestrator | Saturday 28 February 2026 00:38:59 +0000 (0:00:02.185) 0:00:04.591 ***** 2026-02-28 00:39:19.042489 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:19.042504 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:39:19.042519 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:39:19.042532 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:39:19.042547 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:39:19.042562 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:39:19.042575 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:39:19.042587 | orchestrator | 2026-02-28 00:39:19.042600 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-28 00:39:19.042614 | orchestrator | Saturday 28 February 2026 00:39:01 +0000 (0:00:01.216) 0:00:05.808 ***** 2026-02-28 00:39:19.042628 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:39:19.042643 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:39:19.042658 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:39:19.042672 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:39:19.042688 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:39:19.042703 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:39:19.042717 | orchestrator | ok: [testbed-manager] 2026-02-28 00:39:19.042729 | orchestrator | 2026-02-28 00:39:19.042741 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-28 00:39:19.042754 | orchestrator | Saturday 28 February 2026 00:39:02 +0000 (0:00:01.197) 0:00:07.005 ***** 2026-02-28 00:39:19.042767 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:39:19.042781 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:39:19.042795 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:19.042809 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:39:19.042822 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:39:19.042835 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:39:19.042848 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:39:19.042860 | orchestrator | 2026-02-28 00:39:19.042874 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-28 00:39:19.042887 | orchestrator | Saturday 28 February 2026 00:39:03 +0000 (0:00:00.873) 0:00:07.879 ***** 2026-02-28 00:39:19.042900 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:19.042912 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:39:19.042925 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:39:19.042937 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:39:19.042948 | orchestrator | changed: 
[testbed-node-3] 2026-02-28 00:39:19.042960 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:39:19.042973 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:39:19.043201 | orchestrator | 2026-02-28 00:39:19.043226 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-28 00:39:19.043240 | orchestrator | Saturday 28 February 2026 00:39:15 +0000 (0:00:12.543) 0:00:20.422 ***** 2026-02-28 00:39:19.043254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:39:19.043279 | orchestrator | 2026-02-28 00:39:19.043288 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-28 00:39:19.043295 | orchestrator | Saturday 28 February 2026 00:39:16 +0000 (0:00:01.055) 0:00:21.478 ***** 2026-02-28 00:39:19.043303 | orchestrator | changed: [testbed-manager] 2026-02-28 00:39:19.043311 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:39:19.043320 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:39:19.043329 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:39:19.043342 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:39:19.043354 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:39:19.043367 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:39:19.043381 | orchestrator | 2026-02-28 00:39:19.043394 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:39:19.043408 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:39:19.043451 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:19.043465 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:19.043479 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:19.043493 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:19.043507 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:19.043520 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:39:19.043532 | orchestrator | 2026-02-28 00:39:19.043544 | orchestrator | 2026-02-28 00:39:19.043557 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:39:19.043571 | orchestrator | Saturday 28 February 2026 00:39:18 +0000 (0:00:01.825) 0:00:23.303 ***** 2026-02-28 00:39:19.043583 | orchestrator | =============================================================================== 2026-02-28 00:39:19.043597 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.54s 2026-02-28 00:39:19.043611 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.19s 2026-02-28 00:39:19.043624 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.83s 2026-02-28 00:39:19.043649 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.28s 2026-02-28 00:39:19.043664 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2026-02-28 00:39:19.043678 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.20s 2026-02-28 00:39:19.043691 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.06s 2026-02-28 00:39:19.043703 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.87s 2026-02-28 00:39:19.043716 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.86s 2026-02-28 00:39:19.392673 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-28 00:39:19.431654 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-28 00:39:19.431771 | orchestrator | + sudo systemctl restart manager.service 2026-02-28 00:39:32.801726 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-28 00:39:32.801835 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-28 00:39:32.801850 | orchestrator | + local max_attempts=60 2026-02-28 00:39:32.801861 | orchestrator | + local name=ceph-ansible 2026-02-28 00:39:32.801871 | orchestrator | + local attempt_num=1 2026-02-28 00:39:32.801881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:32.836145 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:32.836240 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:32.836253 | orchestrator | + sleep 5 2026-02-28 00:39:37.839451 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:37.877428 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:37.877510 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:37.877520 | orchestrator | + sleep 5 2026-02-28 00:39:42.881369 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:42.921196 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:42.921280 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:42.921295 | orchestrator | + sleep 5 2026-02-28 00:39:47.925526 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:47.963392 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:47.963476 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-28 00:39:47.963490 | orchestrator | + sleep 5 2026-02-28 00:39:52.967800 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:53.005212 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:53.005310 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:53.005618 | orchestrator | + sleep 5 2026-02-28 00:39:58.010333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:58.041547 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:58.041620 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:58.041632 | orchestrator | + sleep 5 2026-02-28 00:40:03.045913 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:03.087655 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:03.087736 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:03.087750 | orchestrator | + sleep 5 2026-02-28 00:40:08.091517 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:08.136771 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:08.138156 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:08.138198 | orchestrator | + sleep 5 2026-02-28 00:40:13.140952 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:13.165253 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:13.165322 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:13.165337 | orchestrator | + sleep 5 2026-02-28 00:40:18.168020 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:18.193921 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:18.194073 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-28 00:40:18.194144 | orchestrator | + sleep 5 2026-02-28 00:40:23.199411 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:23.240837 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:23.240930 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:23.240944 | orchestrator | + sleep 5 2026-02-28 00:40:28.245983 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:28.286658 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:28.286756 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:28.286770 | orchestrator | + sleep 5 2026-02-28 00:40:33.291235 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:33.332228 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:33.332386 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:33.332414 | orchestrator | + sleep 5 2026-02-28 00:40:38.336484 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:38.372605 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:38.372736 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-28 00:40:38.372766 | orchestrator | + local max_attempts=60 2026-02-28 00:40:38.372940 | orchestrator | + local name=kolla-ansible 2026-02-28 00:40:38.372983 | orchestrator | + local attempt_num=1 2026-02-28 00:40:38.373556 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-28 00:40:38.412921 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:38.413017 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-28 00:40:38.413060 | orchestrator | + local max_attempts=60 2026-02-28 00:40:38.413098 | orchestrator | + local name=osism-ansible 2026-02-28 00:40:38.413110 | 
orchestrator | + local attempt_num=1 2026-02-28 00:40:38.413442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-28 00:40:38.446926 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:38.447038 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-28 00:40:38.447061 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-28 00:40:38.616578 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-28 00:40:38.778913 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-28 00:40:38.924023 | orchestrator | ARA in osism-ansible already disabled. 2026-02-28 00:40:39.069608 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-28 00:40:39.071249 | orchestrator | + osism apply gather-facts 2026-02-28 00:40:51.357397 | orchestrator | 2026-02-28 00:40:51 | INFO  | Task dfe20264-94f7-4754-af3e-eaf96d9f9990 (gather-facts) was prepared for execution. 2026-02-28 00:40:51.357504 | orchestrator | 2026-02-28 00:40:51 | INFO  | It takes a moment until task dfe20264-94f7-4754-af3e-eaf96d9f9990 (gather-facts) has been started and output is visible here. 
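The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` every five seconds while the container moves from `unhealthy` through `starting` to `healthy`. A reconstruction of that loop, inferred from the trace rather than copied from the actual script:

```shell
#!/usr/bin/env bash
# wait_for_container_healthy: poll a container's health status until it
# reports "healthy" or the attempt budget is exhausted (sketch inferred
# from the set -x output above, not the exact deploy script).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage as in the trace: wait_for_container_healthy 60 ceph-ansible
```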
2026-02-28 00:41:04.511562 | orchestrator | 2026-02-28 00:41:04.511676 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:41:04.511701 | orchestrator | 2026-02-28 00:41:04.511719 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-28 00:41:04.511738 | orchestrator | Saturday 28 February 2026 00:40:55 +0000 (0:00:00.224) 0:00:00.224 ***** 2026-02-28 00:41:04.511759 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:41:04.511778 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:41:04.511796 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:41:04.511814 | orchestrator | ok: [testbed-manager] 2026-02-28 00:41:04.511834 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:41:04.511855 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:41:04.511874 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:41:04.511888 | orchestrator | 2026-02-28 00:41:04.511899 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:41:04.511911 | orchestrator | 2026-02-28 00:41:04.511922 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:41:04.511934 | orchestrator | Saturday 28 February 2026 00:41:03 +0000 (0:00:07.917) 0:00:08.141 ***** 2026-02-28 00:41:04.511945 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:41:04.511957 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:41:04.511971 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:41:04.511990 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:41:04.512008 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:41:04.512025 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:04.512043 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:04.512126 | orchestrator | 2026-02-28 00:41:04.512152 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 00:41:04.512172 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512185 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512196 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512207 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512218 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512230 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512287 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:41:04.512308 | orchestrator | 2026-02-28 00:41:04.512324 | orchestrator | 2026-02-28 00:41:04.512342 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:41:04.512359 | orchestrator | Saturday 28 February 2026 00:41:04 +0000 (0:00:00.504) 0:00:08.645 ***** 2026-02-28 00:41:04.512377 | orchestrator | =============================================================================== 2026-02-28 00:41:04.512394 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.92s 2026-02-28 00:41:04.512412 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-02-28 00:41:04.870739 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-28 00:41:04.885991 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-28 
00:41:04.899655 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-28 00:41:04.912581 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-28 00:41:04.930747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-28 00:41:04.942753 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-28 00:41:04.952390 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-28 00:41:04.961651 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-28 00:41:04.980122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-28 00:41:04.994485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-28 00:41:05.006922 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-28 00:41:05.023624 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-28 00:41:05.037527 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-28 00:41:05.050184 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-28 00:41:05.074220 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-28 00:41:05.097213 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-28 00:41:05.108227 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-28 00:41:05.134304 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-28 00:41:05.146054 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-28 00:41:05.163558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-28 00:41:05.174967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-28 00:41:05.199270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-28 00:41:05.216016 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-28 00:41:05.236447 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-28 00:41:05.347534 | orchestrator | ok: Runtime: 0:24:35.938349 2026-02-28 00:41:05.438238 | 2026-02-28 00:41:05.438373 | TASK [Deploy services] 2026-02-28 00:41:05.972406 | orchestrator | skipping: Conditional result was False 2026-02-28 00:41:05.992207 | 2026-02-28 00:41:05.992397 | TASK [Deploy in a nutshell] 2026-02-28 00:41:06.711255 | orchestrator | + set -e 2026-02-28 00:41:06.711985 | orchestrator | 2026-02-28 00:41:06.712022 | orchestrator | # PULL IMAGES 2026-02-28 00:41:06.712036 | orchestrator | 2026-02-28 00:41:06.712055 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:41:06.712098 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:41:06.712112 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:41:06.712157 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:41:06.712181 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:41:06.712196 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:41:06.712208 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:41:06.712227 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:41:06.712238 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:41:06.712256 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:41:06.712267 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 00:41:06.712284 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:41:06.712295 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-28 00:41:06.712309 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-28 00:41:06.712321 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-28 00:41:06.712333 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-28 00:41:06.712344 | orchestrator | ++ export ARA=false 2026-02-28 00:41:06.712355 | orchestrator | ++ ARA=false 2026-02-28 00:41:06.712366 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:41:06.712377 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:41:06.712388 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:41:06.712399 | orchestrator | ++ TEMPEST=true 2026-02-28 00:41:06.712409 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:41:06.712421 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:41:06.712431 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:41:06.712444 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.181 2026-02-28 00:41:06.712454 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:41:06.712465 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:41:06.712476 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:41:06.712488 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:41:06.712499 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:41:06.712510 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:41:06.712521 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:41:06.712540 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:41:06.712551 | orchestrator | + echo 2026-02-28 00:41:06.712563 | orchestrator | + echo '# PULL IMAGES' 2026-02-28 00:41:06.712574 | orchestrator | + echo 2026-02-28 00:41:06.712846 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-28 00:41:06.765185 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-28 00:41:06.765305 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-28 00:41:08.684606 | orchestrator | 2026-02-28 00:41:08 | INFO  | Trying to run play pull-images in environment custom 2026-02-28 00:41:18.795651 | orchestrator | 2026-02-28 00:41:18 | INFO  | Task db5d5761-63df-45ad-9fbf-571ffdefbc82 (pull-images) was prepared for execution. 2026-02-28 00:41:18.795744 | orchestrator | 2026-02-28 00:41:18 | INFO  | Task db5d5761-63df-45ad-9fbf-571ffdefbc82 is running in background. No more output. Check ARA for logs. 2026-02-28 00:41:20.789491 | orchestrator | 2026-02-28 00:41:20 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-28 00:41:30.998330 | orchestrator | 2026-02-28 00:41:30 | INFO  | Task 43fde170-29ff-4bcd-b5a5-be3d96088b22 (wipe-partitions) was prepared for execution. 2026-02-28 00:41:30.998418 | orchestrator | 2026-02-28 00:41:30 | INFO  | It takes a moment until task 43fde170-29ff-4bcd-b5a5-be3d96088b22 (wipe-partitions) has been started and output is visible here. 
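The `semver 9.5.0 7.0.0` call above prints a comparison result (1 here, since 9.5.0 is newer) which the script then gates on with `[[ 1 -ge 0 ]]`. A rough stand-in for such a comparator using `sort -V`, assuming plain dotted versions with no pre-release tags (the real `semver` helper may behave differently):

```shell
#!/usr/bin/env bash
# semver_cmp: print -1, 0, or 1 depending on how version A compares to B.
# Assumption: simple dotted versions; relies on GNU sort's -V version sort.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Gate in the same shape as the trace's `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]`:
if [[ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]]; then
    echo "manager version meets the minimum"
fi
```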
2026-02-28 00:41:43.534412 | orchestrator | 2026-02-28 00:41:43.534466 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-28 00:41:43.534472 | orchestrator | 2026-02-28 00:41:43.534476 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-28 00:41:43.534483 | orchestrator | Saturday 28 February 2026 00:41:35 +0000 (0:00:00.129) 0:00:00.129 ***** 2026-02-28 00:41:43.534487 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:41:43.534491 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:41:43.534495 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:41:43.534499 | orchestrator | 2026-02-28 00:41:43.534504 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-28 00:41:43.534519 | orchestrator | Saturday 28 February 2026 00:41:36 +0000 (0:00:00.575) 0:00:00.705 ***** 2026-02-28 00:41:43.534523 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:41:43.534527 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:43.534531 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:43.534537 | orchestrator | 2026-02-28 00:41:43.534541 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-28 00:41:43.534545 | orchestrator | Saturday 28 February 2026 00:41:36 +0000 (0:00:00.367) 0:00:01.072 ***** 2026-02-28 00:41:43.534549 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:41:43.534553 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:41:43.534557 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:41:43.534561 | orchestrator | 2026-02-28 00:41:43.534565 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-28 00:41:43.534569 | orchestrator | Saturday 28 February 2026 00:41:36 +0000 (0:00:00.561) 0:00:01.634 ***** 2026-02-28 00:41:43.534573 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:41:43.534576 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:43.534580 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:43.534584 | orchestrator | 2026-02-28 00:41:43.534588 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-28 00:41:43.534592 | orchestrator | Saturday 28 February 2026 00:41:37 +0000 (0:00:00.280) 0:00:01.915 ***** 2026-02-28 00:41:43.534596 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-28 00:41:43.534601 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-28 00:41:43.534605 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-28 00:41:43.534609 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-28 00:41:43.534613 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-28 00:41:43.534616 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-28 00:41:43.534620 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-28 00:41:43.534624 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-28 00:41:43.534628 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-28 00:41:43.534632 | orchestrator | 2026-02-28 00:41:43.534635 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-28 00:41:43.534639 | orchestrator | Saturday 28 February 2026 00:41:38 +0000 (0:00:01.204) 0:00:03.120 ***** 2026-02-28 00:41:43.534643 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-28 00:41:43.534647 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-28 00:41:43.534651 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-28 00:41:43.534655 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-28 00:41:43.534659 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-28 00:41:43.534662 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-02-28 00:41:43.534666 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-28 00:41:43.534670 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-28 00:41:43.534674 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-28 00:41:43.534677 | orchestrator | 2026-02-28 00:41:43.534681 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-28 00:41:43.534685 | orchestrator | Saturday 28 February 2026 00:41:39 +0000 (0:00:01.510) 0:00:04.631 ***** 2026-02-28 00:41:43.534689 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-28 00:41:43.534693 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-28 00:41:43.534697 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-28 00:41:43.534700 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-28 00:41:43.534704 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-28 00:41:43.534708 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-28 00:41:43.534712 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-28 00:41:43.534715 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-28 00:41:43.534725 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-28 00:41:43.534729 | orchestrator | 2026-02-28 00:41:43.534733 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-28 00:41:43.534737 | orchestrator | Saturday 28 February 2026 00:41:42 +0000 (0:00:02.185) 0:00:06.816 ***** 2026-02-28 00:41:43.534741 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:41:43.534745 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:41:43.534748 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:41:43.534752 | orchestrator | 2026-02-28 00:41:43.534756 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-02-28 00:41:43.534760 | orchestrator | Saturday 28 February 2026 00:41:42 +0000 (0:00:00.556) 0:00:07.373 ***** 2026-02-28 00:41:43.534764 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:41:43.534767 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:41:43.534771 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:41:43.534775 | orchestrator | 2026-02-28 00:41:43.534779 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:41:43.534783 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:43.534788 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:43.534800 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:43.534804 | orchestrator | 2026-02-28 00:41:43.534807 | orchestrator | 2026-02-28 00:41:43.534811 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:41:43.534815 | orchestrator | Saturday 28 February 2026 00:41:43 +0000 (0:00:00.584) 0:00:07.958 ***** 2026-02-28 00:41:43.534819 | orchestrator | =============================================================================== 2026-02-28 00:41:43.534823 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s 2026-02-28 00:41:43.534826 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.51s 2026-02-28 00:41:43.534830 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2026-02-28 00:41:43.534834 | orchestrator | Request device events from the kernel ----------------------------------- 0.59s 2026-02-28 00:41:43.534838 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.58s 2026-02-28 00:41:43.534841 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s 2026-02-28 00:41:43.534845 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s 2026-02-28 00:41:43.534849 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s 2026-02-28 00:41:43.534853 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2026-02-28 00:41:55.716750 | orchestrator | 2026-02-28 00:41:55 | INFO  | Task 65422a89-c1ab-4502-a05d-2922c0eba4b8 (facts) was prepared for execution. 2026-02-28 00:41:55.716879 | orchestrator | 2026-02-28 00:41:55 | INFO  | It takes a moment until task 65422a89-c1ab-4502-a05d-2922c0eba4b8 (facts) has been started and output is visible here. 2026-02-28 00:42:09.028073 | orchestrator | 2026-02-28 00:42:09.028166 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-28 00:42:09.028180 | orchestrator | 2026-02-28 00:42:09.028189 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:42:09.028199 | orchestrator | Saturday 28 February 2026 00:42:00 +0000 (0:00:00.266) 0:00:00.266 ***** 2026-02-28 00:42:09.028208 | orchestrator | ok: [testbed-manager] 2026-02-28 00:42:09.028218 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:42:09.028227 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:42:09.028236 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:42:09.028269 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:09.028278 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:42:09.028287 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:09.028296 | orchestrator | 2026-02-28 00:42:09.028305 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:42:09.028313 | 
orchestrator | Saturday 28 February 2026 00:42:01 +0000 (0:00:01.075) 0:00:01.341 ***** 2026-02-28 00:42:09.028322 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:42:09.028332 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:42:09.028340 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:42:09.028349 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:42:09.028358 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:09.028366 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:09.028375 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:09.028384 | orchestrator | 2026-02-28 00:42:09.028393 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:42:09.028401 | orchestrator | 2026-02-28 00:42:09.028424 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-28 00:42:09.028434 | orchestrator | Saturday 28 February 2026 00:42:02 +0000 (0:00:01.233) 0:00:02.574 ***** 2026-02-28 00:42:09.028443 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:42:09.028451 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:42:09.028460 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:42:09.028469 | orchestrator | ok: [testbed-manager] 2026-02-28 00:42:09.028478 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:09.028487 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:42:09.028496 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:09.028504 | orchestrator | 2026-02-28 00:42:09.028513 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:42:09.028522 | orchestrator | 2026-02-28 00:42:09.028531 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:42:09.028540 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:05.695) 0:00:08.270 ***** 2026-02-28 00:42:09.028548 | orchestrator | 
skipping: [testbed-manager] 2026-02-28 00:42:09.028557 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:42:09.028565 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:42:09.028574 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:42:09.028582 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:09.028591 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:09.028599 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:09.028610 | orchestrator | 2026-02-28 00:42:09.028621 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:42:09.028631 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028643 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028654 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028664 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028674 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028685 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028695 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:42:09.028706 | orchestrator | 2026-02-28 00:42:09.028716 | orchestrator | 2026-02-28 00:42:09.028726 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:42:09.028742 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.541) 0:00:08.811 ***** 2026-02-28 00:42:09.028751 | orchestrator | =============================================================================== 
2026-02-28 00:42:09.028759 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.70s 2026-02-28 00:42:09.028768 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-02-28 00:42:09.028777 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2026-02-28 00:42:09.028786 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-02-28 00:42:11.578696 | orchestrator | 2026-02-28 00:42:11 | INFO  | Task cc172cf2-c190-407d-829c-87a48add666e (ceph-configure-lvm-volumes) was prepared for execution. 2026-02-28 00:42:11.578784 | orchestrator | 2026-02-28 00:42:11 | INFO  | It takes a moment until task cc172cf2-c190-407d-829c-87a48add666e (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-28 00:42:23.644151 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 00:42:23.644294 | orchestrator | 2.16.14 2026-02-28 00:42:23.644322 | orchestrator | 2026-02-28 00:42:23.644343 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-28 00:42:23.644364 | orchestrator | 2026-02-28 00:42:23.644386 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:42:23.644399 | orchestrator | Saturday 28 February 2026 00:42:16 +0000 (0:00:00.345) 0:00:00.345 ***** 2026-02-28 00:42:23.644412 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:23.644423 | orchestrator | 2026-02-28 00:42:23.644434 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:42:23.644445 | orchestrator | Saturday 28 February 2026 00:42:16 +0000 (0:00:00.249) 0:00:00.594 ***** 2026-02-28 00:42:23.644456 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:23.644467 | orchestrator | 
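The device-discovery tasks in this play build the list of available block devices and then attach their stable aliases (the `scsi-0QEMU_QEMU_HARDDISK_...` names seen in the log) from `/dev/disk/by-id`. The play does this through Ansible facts and included task files; a rough shell equivalent of the same lookup, offered only as a sketch, is:

```shell
#!/usr/bin/env bash
# Sketch: enumerate kernel block device names and the /dev/disk/by-id
# aliases that resolve to a given device (what the play records per item).
set -u

list_block_devices() {
    local dev
    for dev in /sys/block/*; do
        echo "${dev##*/}"            # e.g. sda, sdb, loop0, sr0
    done
}

list_by_id_links() {
    # Print by-id aliases pointing at /dev/<name>, e.g. list_by_id_links sdb
    local name=$1 link
    for link in /dev/disk/by-id/*; do
        [ "$(readlink -f "$link" 2>/dev/null)" = "/dev/$name" ] && echo "${link##*/}"
    done
}
```

Using the by-id aliases instead of kernel names keeps the generated Ceph configuration stable across reboots, where `sdb`/`sdc` ordering can change.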
2026-02-28 00:42:23.644478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.644489 | orchestrator | Saturday 28 February 2026 00:42:16 +0000 (0:00:00.214) 0:00:00.808 ***** 2026-02-28 00:42:23.644498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-28 00:42:23.644521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-28 00:42:23.644532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-28 00:42:23.644542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-28 00:42:23.644551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-28 00:42:23.644561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-28 00:42:23.644571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-28 00:42:23.644580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-28 00:42:23.644590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-28 00:42:23.644607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-28 00:42:23.644632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-28 00:42:23.644651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-28 00:42:23.644666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-28 00:42:23.644681 | orchestrator | 2026-02-28 00:42:23.644697 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-02-28 00:42:23.644713 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.473) 0:00:01.282 ***** 2026-02-28 00:42:23.644757 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.644795 | orchestrator | 2026-02-28 00:42:23.644822 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.644838 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.227) 0:00:01.510 ***** 2026-02-28 00:42:23.644856 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.644873 | orchestrator | 2026-02-28 00:42:23.644889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.644906 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.208) 0:00:01.719 ***** 2026-02-28 00:42:23.644923 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.644941 | orchestrator | 2026-02-28 00:42:23.644957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.644973 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.212) 0:00:01.931 ***** 2026-02-28 00:42:23.644992 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645003 | orchestrator | 2026-02-28 00:42:23.645012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645022 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.237) 0:00:02.168 ***** 2026-02-28 00:42:23.645031 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645100 | orchestrator | 2026-02-28 00:42:23.645111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645121 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.218) 0:00:02.386 ***** 2026-02-28 00:42:23.645131 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:42:23.645140 | orchestrator | 2026-02-28 00:42:23.645150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645159 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.224) 0:00:02.611 ***** 2026-02-28 00:42:23.645169 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645178 | orchestrator | 2026-02-28 00:42:23.645188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645197 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.217) 0:00:02.828 ***** 2026-02-28 00:42:23.645207 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645216 | orchestrator | 2026-02-28 00:42:23.645226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645236 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.196) 0:00:03.024 ***** 2026-02-28 00:42:23.645245 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5) 2026-02-28 00:42:23.645257 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5) 2026-02-28 00:42:23.645266 | orchestrator | 2026-02-28 00:42:23.645276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645307 | orchestrator | Saturday 28 February 2026 00:42:19 +0000 (0:00:00.417) 0:00:03.442 ***** 2026-02-28 00:42:23.645317 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81) 2026-02-28 00:42:23.645335 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81) 2026-02-28 00:42:23.645349 | orchestrator | 2026-02-28 00:42:23.645366 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-02-28 00:42:23.645381 | orchestrator | Saturday 28 February 2026 00:42:19 +0000 (0:00:00.652) 0:00:04.094 ***** 2026-02-28 00:42:23.645398 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf) 2026-02-28 00:42:23.645414 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf) 2026-02-28 00:42:23.645429 | orchestrator | 2026-02-28 00:42:23.645445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645461 | orchestrator | Saturday 28 February 2026 00:42:20 +0000 (0:00:00.648) 0:00:04.742 ***** 2026-02-28 00:42:23.645492 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de) 2026-02-28 00:42:23.645509 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de) 2026-02-28 00:42:23.645525 | orchestrator | 2026-02-28 00:42:23.645541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:23.645557 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.867) 0:00:05.610 ***** 2026-02-28 00:42:23.645573 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:42:23.645590 | orchestrator | 2026-02-28 00:42:23.645607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.645623 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.361) 0:00:05.971 ***** 2026-02-28 00:42:23.645637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-28 00:42:23.645647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-28 00:42:23.645656 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-28 00:42:23.645666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-28 00:42:23.645675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-28 00:42:23.645685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-28 00:42:23.645694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-28 00:42:23.645704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-28 00:42:23.645714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-28 00:42:23.645723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-28 00:42:23.645733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-28 00:42:23.645742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-28 00:42:23.645752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-28 00:42:23.645761 | orchestrator | 2026-02-28 00:42:23.645771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.645780 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.375) 0:00:06.347 ***** 2026-02-28 00:42:23.645790 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645800 | orchestrator | 2026-02-28 00:42:23.645809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.645819 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 
(0:00:00.196) 0:00:06.543 ***** 2026-02-28 00:42:23.645828 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645838 | orchestrator | 2026-02-28 00:42:23.645847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.645858 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.205) 0:00:06.749 ***** 2026-02-28 00:42:23.645875 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645890 | orchestrator | 2026-02-28 00:42:23.645906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.645921 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.203) 0:00:06.953 ***** 2026-02-28 00:42:23.645937 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.645952 | orchestrator | 2026-02-28 00:42:23.645968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.645984 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.204) 0:00:07.158 ***** 2026-02-28 00:42:23.646102 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.646127 | orchestrator | 2026-02-28 00:42:23.646145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.646162 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.185) 0:00:07.343 ***** 2026-02-28 00:42:23.646182 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.646199 | orchestrator | 2026-02-28 00:42:23.646218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:23.646236 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.217) 0:00:07.560 ***** 2026-02-28 00:42:23.646255 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:23.646273 | orchestrator | 2026-02-28 00:42:23.646304 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-28 00:42:30.808848 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.192) 0:00:07.752 ***** 2026-02-28 00:42:30.808944 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:30.808961 | orchestrator | 2026-02-28 00:42:30.808974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:30.808985 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.218) 0:00:07.971 ***** 2026-02-28 00:42:30.808997 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-28 00:42:30.809025 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-28 00:42:30.809082 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-28 00:42:30.809096 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-28 00:42:30.809107 | orchestrator | 2026-02-28 00:42:30.809118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:30.809129 | orchestrator | Saturday 28 February 2026 00:42:24 +0000 (0:00:01.086) 0:00:09.058 ***** 2026-02-28 00:42:30.809140 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:30.809151 | orchestrator | 2026-02-28 00:42:30.809162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:30.809173 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:00.206) 0:00:09.264 ***** 2026-02-28 00:42:30.809184 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:30.809195 | orchestrator | 2026-02-28 00:42:30.809206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:30.809217 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:00.209) 0:00:09.473 ***** 2026-02-28 00:42:30.809228 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:30.809239 | orchestrator | 2026-02-28 
00:42:30.809250 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:30.809260 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:00.209) 0:00:09.683 *****
2026-02-28 00:42:30.809271 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809282 | orchestrator |
2026-02-28 00:42:30.809293 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-28 00:42:30.809304 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:00.209) 0:00:09.893 *****
2026-02-28 00:42:30.809315 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-28 00:42:30.809326 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-28 00:42:30.809336 | orchestrator |
2026-02-28 00:42:30.809347 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-28 00:42:30.809358 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:00.169) 0:00:10.063 *****
2026-02-28 00:42:30.809369 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809380 | orchestrator |
2026-02-28 00:42:30.809391 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-28 00:42:30.809404 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.127) 0:00:10.190 *****
2026-02-28 00:42:30.809416 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809428 | orchestrator |
2026-02-28 00:42:30.809441 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-28 00:42:30.809453 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.142) 0:00:10.332 *****
2026-02-28 00:42:30.809487 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809499 | orchestrator |
2026-02-28 00:42:30.809512 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-28 00:42:30.809524 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.136) 0:00:10.468 *****
2026-02-28 00:42:30.809536 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:30.809549 | orchestrator |
2026-02-28 00:42:30.809561 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-28 00:42:30.809574 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.137) 0:00:10.606 *****
2026-02-28 00:42:30.809586 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '867868d0-bc68-54b2-8c81-3bd5cfa2d741'}})
2026-02-28 00:42:30.809598 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee950762-4564-5222-9e83-52313bf46222'}})
2026-02-28 00:42:30.809609 | orchestrator |
2026-02-28 00:42:30.809620 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-28 00:42:30.809632 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.177) 0:00:10.783 *****
2026-02-28 00:42:30.809644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '867868d0-bc68-54b2-8c81-3bd5cfa2d741'}})
2026-02-28 00:42:30.809662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee950762-4564-5222-9e83-52313bf46222'}})
2026-02-28 00:42:30.809673 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809684 | orchestrator |
2026-02-28 00:42:30.809695 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-28 00:42:30.809706 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.142) 0:00:10.925 *****
2026-02-28 00:42:30.809717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '867868d0-bc68-54b2-8c81-3bd5cfa2d741'}})
2026-02-28 00:42:30.809728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee950762-4564-5222-9e83-52313bf46222'}})
2026-02-28 00:42:30.809739 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809750 | orchestrator |
2026-02-28 00:42:30.809761 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-28 00:42:30.809772 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.369) 0:00:11.295 *****
2026-02-28 00:42:30.809782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '867868d0-bc68-54b2-8c81-3bd5cfa2d741'}})
2026-02-28 00:42:30.809809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee950762-4564-5222-9e83-52313bf46222'}})
2026-02-28 00:42:30.809820 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809831 | orchestrator |
2026-02-28 00:42:30.809857 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-28 00:42:30.809878 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.159) 0:00:11.455 *****
2026-02-28 00:42:30.809889 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:30.809899 | orchestrator |
2026-02-28 00:42:30.809910 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-28 00:42:30.809921 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.145) 0:00:11.600 *****
2026-02-28 00:42:30.809932 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:42:30.809943 | orchestrator |
2026-02-28 00:42:30.809954 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-28 00:42:30.809964 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.153) 0:00:11.754 *****
2026-02-28 00:42:30.809975 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.809986 | orchestrator |
2026-02-28 00:42:30.809997 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-28 00:42:30.810007 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.146) 0:00:11.901 *****
2026-02-28 00:42:30.810095 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.810109 | orchestrator |
2026-02-28 00:42:30.810120 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-28 00:42:30.810131 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.127) 0:00:12.028 *****
2026-02-28 00:42:30.810142 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.810153 | orchestrator |
2026-02-28 00:42:30.810164 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-28 00:42:30.810174 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.141) 0:00:12.162 *****
2026-02-28 00:42:30.810185 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:42:30.810196 | orchestrator |     "ceph_osd_devices": {
2026-02-28 00:42:30.810207 | orchestrator |         "sdb": {
2026-02-28 00:42:30.810217 | orchestrator |             "osd_lvm_uuid": "867868d0-bc68-54b2-8c81-3bd5cfa2d741"
2026-02-28 00:42:30.810228 | orchestrator |         },
2026-02-28 00:42:30.810239 | orchestrator |         "sdc": {
2026-02-28 00:42:30.810250 | orchestrator |             "osd_lvm_uuid": "ee950762-4564-5222-9e83-52313bf46222"
2026-02-28 00:42:30.810261 | orchestrator |         }
2026-02-28 00:42:30.810271 | orchestrator |     }
2026-02-28 00:42:30.810282 | orchestrator | }
2026-02-28 00:42:30.810293 | orchestrator |
2026-02-28 00:42:30.810303 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-28 00:42:30.810320 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.141) 0:00:12.304 *****
2026-02-28 00:42:30.810331 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.810341 | orchestrator |
2026-02-28 00:42:30.810352 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-28 00:42:30.810363 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.116) 0:00:12.420 *****
2026-02-28 00:42:30.810374 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.810384 | orchestrator |
2026-02-28 00:42:30.810395 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-28 00:42:30.810406 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.105) 0:00:12.526 *****
2026-02-28 00:42:30.810417 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:30.810427 | orchestrator |
2026-02-28 00:42:30.810438 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-28 00:42:30.810449 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.104) 0:00:12.630 *****
2026-02-28 00:42:30.810459 | orchestrator | changed: [testbed-node-3] => {
2026-02-28 00:42:30.810470 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-28 00:42:30.810481 | orchestrator |         "ceph_osd_devices": {
2026-02-28 00:42:30.810491 | orchestrator |             "sdb": {
2026-02-28 00:42:30.810502 | orchestrator |                 "osd_lvm_uuid": "867868d0-bc68-54b2-8c81-3bd5cfa2d741"
2026-02-28 00:42:30.810513 | orchestrator |             },
2026-02-28 00:42:30.810523 | orchestrator |             "sdc": {
2026-02-28 00:42:30.810534 | orchestrator |                 "osd_lvm_uuid": "ee950762-4564-5222-9e83-52313bf46222"
2026-02-28 00:42:30.810545 | orchestrator |             }
2026-02-28 00:42:30.810555 | orchestrator |         },
2026-02-28 00:42:30.810566 | orchestrator |         "lvm_volumes": [
2026-02-28 00:42:30.810577 | orchestrator |             {
2026-02-28 00:42:30.810588 | orchestrator |                 "data": "osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741",
2026-02-28 00:42:30.810599 | orchestrator |                 "data_vg": "ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741"
2026-02-28 00:42:30.810609 | orchestrator |             },
2026-02-28 00:42:30.810620 | orchestrator |             {
2026-02-28 00:42:30.810631 | orchestrator |                 "data": "osd-block-ee950762-4564-5222-9e83-52313bf46222",
2026-02-28 00:42:30.810641 | orchestrator |                 "data_vg": "ceph-ee950762-4564-5222-9e83-52313bf46222"
2026-02-28 00:42:30.810652 | orchestrator |             }
2026-02-28 00:42:30.810663 | orchestrator |         ]
2026-02-28 00:42:30.810673 | orchestrator |     }
2026-02-28 00:42:30.810684 | orchestrator | }
2026-02-28 00:42:30.810702 | orchestrator |
2026-02-28 00:42:30.810712 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-28 00:42:30.810723 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.315) 0:00:12.945 *****
2026-02-28 00:42:30.810734 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-28 00:42:30.810745 | orchestrator |
2026-02-28 00:42:30.810755 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-28 00:42:30.810766 | orchestrator |
2026-02-28 00:42:30.810777 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:42:30.810787 | orchestrator | Saturday 28 February 2026 00:42:30 +0000 (0:00:01.527) 0:00:14.473 *****
2026-02-28 00:42:30.810798 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-28 00:42:30.810808 | orchestrator |
2026-02-28 00:42:30.810819 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:42:30.810830 | orchestrator | Saturday 28 February 2026 00:42:30 +0000 (0:00:00.232) 0:00:14.706 *****
2026-02-28 00:42:30.810841 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:30.810851 | orchestrator |
2026-02-28 00:42:30.810869 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414431 | orchestrator | Saturday 28 February 2026 00:42:30 +0000 (0:00:00.213) 0:00:14.920 *****
2026-02-28 00:42:39.414509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-28 00:42:39.414518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-28 00:42:39.414525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-28 00:42:39.414532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-28 00:42:39.414538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-28 00:42:39.414545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-28 00:42:39.414551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-28 00:42:39.414572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-28 00:42:39.414579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-28 00:42:39.414585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-28 00:42:39.414591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-28 00:42:39.414598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-28 00:42:39.414606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-28 00:42:39.414613 | orchestrator |
2026-02-28 00:42:39.414620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414627 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.295) 0:00:15.215 *****
2026-02-28 00:42:39.414633 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414640 | orchestrator |
2026-02-28 00:42:39.414647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414653 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.137) 0:00:15.353 *****
2026-02-28 00:42:39.414659 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414665 | orchestrator |
2026-02-28 00:42:39.414672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414678 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.181) 0:00:15.534 *****
2026-02-28 00:42:39.414684 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414691 | orchestrator |
2026-02-28 00:42:39.414697 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414703 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.162) 0:00:15.697 *****
2026-02-28 00:42:39.414727 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414734 | orchestrator |
2026-02-28 00:42:39.414740 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414747 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.158) 0:00:15.855 *****
2026-02-28 00:42:39.414753 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414759 | orchestrator |
2026-02-28 00:42:39.414765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414772 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.464) 0:00:16.320 *****
2026-02-28 00:42:39.414778 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414784 | orchestrator |
2026-02-28 00:42:39.414791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414797 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.178) 0:00:16.499 *****
2026-02-28 00:42:39.414803 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414809 | orchestrator |
2026-02-28 00:42:39.414815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414821 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.202) 0:00:16.701 *****
2026-02-28 00:42:39.414828 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.414834 | orchestrator |
2026-02-28 00:42:39.414840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414846 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.209) 0:00:16.911 *****
2026-02-28 00:42:39.414852 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20)
2026-02-28 00:42:39.414860 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20)
2026-02-28 00:42:39.414866 | orchestrator |
2026-02-28 00:42:39.414872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414879 | orchestrator | Saturday 28 February 2026 00:42:33 +0000 (0:00:00.454) 0:00:17.365 *****
2026-02-28 00:42:39.414885 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723)
2026-02-28 00:42:39.414891 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723)
2026-02-28 00:42:39.414897 | orchestrator |
2026-02-28 00:42:39.414903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414910 | orchestrator | Saturday 28 February 2026 00:42:33 +0000 (0:00:00.448) 0:00:17.813 *****
2026-02-28 00:42:39.414916 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4)
2026-02-28 00:42:39.414922 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4)
2026-02-28 00:42:39.414928 | orchestrator |
2026-02-28 00:42:39.414935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414952 | orchestrator | Saturday 28 February 2026 00:42:34 +0000 (0:00:00.457) 0:00:18.271 *****
2026-02-28 00:42:39.414959 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a)
2026-02-28 00:42:39.414965 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a)
2026-02-28 00:42:39.414972 | orchestrator |
2026-02-28 00:42:39.414982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:39.414989 | orchestrator | Saturday 28 February 2026 00:42:34 +0000 (0:00:00.704) 0:00:18.975 *****
2026-02-28 00:42:39.414997 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:42:39.415004 | orchestrator |
2026-02-28 00:42:39.415011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415018 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.355) 0:00:19.330 *****
2026-02-28 00:42:39.415026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-28 00:42:39.415056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-28 00:42:39.415064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-28 00:42:39.415071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-28 00:42:39.415078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-28 00:42:39.415085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-28 00:42:39.415092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-28 00:42:39.415100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-28 00:42:39.415107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-28 00:42:39.415114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-28 00:42:39.415121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-28 00:42:39.415128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-28 00:42:39.415135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-28 00:42:39.415142 | orchestrator |
2026-02-28 00:42:39.415150 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415157 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.480) 0:00:19.811 *****
2026-02-28 00:42:39.415164 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415171 | orchestrator |
2026-02-28 00:42:39.415178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415185 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.773) 0:00:20.584 *****
2026-02-28 00:42:39.415192 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415200 | orchestrator |
2026-02-28 00:42:39.415207 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415214 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.234) 0:00:20.819 *****
2026-02-28 00:42:39.415221 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415228 | orchestrator |
2026-02-28 00:42:39.415235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415242 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.248) 0:00:21.068 *****
2026-02-28 00:42:39.415249 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415257 | orchestrator |
2026-02-28 00:42:39.415264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415271 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.314) 0:00:21.382 *****
2026-02-28 00:42:39.415278 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415285 | orchestrator |
2026-02-28 00:42:39.415293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415300 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.245) 0:00:21.627 *****
2026-02-28 00:42:39.415307 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415314 | orchestrator |
2026-02-28 00:42:39.415321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415329 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.239) 0:00:21.867 *****
2026-02-28 00:42:39.415336 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415343 | orchestrator |
2026-02-28 00:42:39.415350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415357 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.210) 0:00:22.078 *****
2026-02-28 00:42:39.415364 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:39.415374 | orchestrator |
2026-02-28 00:42:39.415381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415387 | orchestrator | Saturday 28 February 2026 00:42:38 +0000 (0:00:00.235) 0:00:22.314 *****
2026-02-28 00:42:39.415393 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-28 00:42:39.415401 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-28 00:42:39.415407 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-28 00:42:39.415414 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-28 00:42:39.415420 | orchestrator |
2026-02-28 00:42:39.415426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:39.415433 | orchestrator | Saturday 28 February 2026 00:42:39 +0000 (0:00:00.953) 0:00:23.267 *****
2026-02-28 00:42:39.415439 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.453891 | orchestrator |
2026-02-28 00:42:46.453951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:46.453961 | orchestrator | Saturday 28 February 2026 00:42:39 +0000 (0:00:00.258) 0:00:23.526 *****
2026-02-28 00:42:46.453968 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.453975 | orchestrator |
2026-02-28 00:42:46.453981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:46.453999 | orchestrator | Saturday 28 February 2026 00:42:39 +0000 (0:00:00.299) 0:00:23.825 *****
2026-02-28 00:42:46.454006 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454012 | orchestrator |
2026-02-28 00:42:46.454088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:42:46.454096 | orchestrator | Saturday 28 February 2026 00:42:40 +0000 (0:00:00.307) 0:00:24.133 *****
2026-02-28 00:42:46.454104 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454110 | orchestrator |
2026-02-28 00:42:46.454117 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-28 00:42:46.454124 | orchestrator | Saturday 28 February 2026 00:42:40 +0000 (0:00:00.933) 0:00:25.066 *****
2026-02-28 00:42:46.454131 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-28 00:42:46.454137 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-28 00:42:46.454144 | orchestrator |
2026-02-28 00:42:46.454150 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-28 00:42:46.454156 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.191) 0:00:25.258 *****
2026-02-28 00:42:46.454163 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454169 | orchestrator |
2026-02-28 00:42:46.454176 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-28 00:42:46.454183 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.191) 0:00:25.450 *****
2026-02-28 00:42:46.454189 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454195 | orchestrator |
2026-02-28 00:42:46.454202 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-28 00:42:46.454208 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.137) 0:00:25.587 *****
2026-02-28 00:42:46.454215 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454221 | orchestrator |
2026-02-28 00:42:46.454228 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-28 00:42:46.454235 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.150) 0:00:25.737 *****
2026-02-28 00:42:46.454241 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:46.454248 | orchestrator |
2026-02-28 00:42:46.454254 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-28 00:42:46.454261 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.175) 0:00:25.913 *****
2026-02-28 00:42:46.454268 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b073c23-7edc-573a-a84d-7267a4d3e426'}})
2026-02-28 00:42:46.454275 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b30b5faa-3070-5965-91f3-7d8dbacf19e9'}})
2026-02-28 00:42:46.454295 | orchestrator |
2026-02-28 00:42:46.454302 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-28 00:42:46.454310 | orchestrator | Saturday 28 February 2026 00:42:41 +0000 (0:00:00.196) 0:00:26.109 *****
2026-02-28 00:42:46.454318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b073c23-7edc-573a-a84d-7267a4d3e426'}})
2026-02-28 00:42:46.454327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b30b5faa-3070-5965-91f3-7d8dbacf19e9'}})
2026-02-28 00:42:46.454334 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454342 | orchestrator |
2026-02-28 00:42:46.454350 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-28 00:42:46.454357 | orchestrator | Saturday 28 February 2026 00:42:42 +0000 (0:00:00.146) 0:00:26.256 *****
2026-02-28 00:42:46.454365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b073c23-7edc-573a-a84d-7267a4d3e426'}})
2026-02-28 00:42:46.454373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b30b5faa-3070-5965-91f3-7d8dbacf19e9'}})
2026-02-28 00:42:46.454380 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454388 | orchestrator |
2026-02-28 00:42:46.454395 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-28 00:42:46.454403 | orchestrator | Saturday 28 February 2026 00:42:42 +0000 (0:00:00.151) 0:00:26.408 *****
2026-02-28 00:42:46.454411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b073c23-7edc-573a-a84d-7267a4d3e426'}})
2026-02-28 00:42:46.454419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b30b5faa-3070-5965-91f3-7d8dbacf19e9'}})
2026-02-28 00:42:46.454427 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454434 | orchestrator |
2026-02-28 00:42:46.454442 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-28 00:42:46.454449 | orchestrator | Saturday 28 February 2026 00:42:42 +0000 (0:00:00.143) 0:00:26.552 *****
2026-02-28 00:42:46.454457 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:46.454464 | orchestrator |
2026-02-28 00:42:46.454472 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-28 00:42:46.454479 | orchestrator | Saturday 28 February 2026 00:42:42 +0000 (0:00:00.132) 0:00:26.685 *****
2026-02-28 00:42:46.454487 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:42:46.454495 | orchestrator |
2026-02-28 00:42:46.454504 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-28 00:42:46.454511 | orchestrator | Saturday 28 February 2026 00:42:42 +0000 (0:00:00.149) 0:00:26.835 *****
2026-02-28 00:42:46.454530 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454537 | orchestrator |
2026-02-28 00:42:46.454543 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-28 00:42:46.454550 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.383) 0:00:27.218 *****
2026-02-28 00:42:46.454558 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454566 | orchestrator |
2026-02-28 00:42:46.454574 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-28 00:42:46.454583 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.129) 0:00:27.348 *****
2026-02-28 00:42:46.454590 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454599 | orchestrator |
2026-02-28 00:42:46.454607 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-28 00:42:46.454616 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.133) 0:00:27.481 *****
2026-02-28 00:42:46.454624 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:42:46.454633 | orchestrator |     "ceph_osd_devices": {
2026-02-28 00:42:46.454642 | orchestrator |         "sdb": {
2026-02-28 00:42:46.454651 | orchestrator |             "osd_lvm_uuid": "7b073c23-7edc-573a-a84d-7267a4d3e426"
2026-02-28 00:42:46.454659 | orchestrator |         },
2026-02-28 00:42:46.454674 | orchestrator |         "sdc": {
2026-02-28 00:42:46.454688 | orchestrator |             "osd_lvm_uuid": "b30b5faa-3070-5965-91f3-7d8dbacf19e9"
2026-02-28 00:42:46.454696 | orchestrator |         }
2026-02-28 00:42:46.454704 | orchestrator |     }
2026-02-28 00:42:46.454713 | orchestrator | }
2026-02-28 00:42:46.454722 | orchestrator |
2026-02-28 00:42:46.454730 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-28 00:42:46.454738 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.124) 0:00:27.606 *****
2026-02-28 00:42:46.454746 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454754 | orchestrator |
2026-02-28 00:42:46.454761 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-28 00:42:46.454769 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.134) 0:00:27.741 *****
2026-02-28 00:42:46.454776 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454784 | orchestrator |
2026-02-28 00:42:46.454790 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-28 00:42:46.454797 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.156) 0:00:27.897 *****
2026-02-28 00:42:46.454804 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:46.454811 | orchestrator |
2026-02-28 00:42:46.454817 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-28 00:42:46.454824 | orchestrator | Saturday 28 February 2026 00:42:43 +0000 (0:00:00.139) 0:00:28.037 *****
2026-02-28 00:42:46.454832 | orchestrator | changed: [testbed-node-4] => {
2026-02-28 00:42:46.454839 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-28 00:42:46.454846 | orchestrator |         "ceph_osd_devices": {
2026-02-28 00:42:46.454853 | orchestrator |             "sdb": {
2026-02-28 00:42:46.454864 | orchestrator |                 "osd_lvm_uuid": "7b073c23-7edc-573a-a84d-7267a4d3e426"
2026-02-28 00:42:46.454871 | orchestrator |             },
2026-02-28 00:42:46.454877 | orchestrator |             "sdc": {
2026-02-28 00:42:46.454884 | orchestrator |                 "osd_lvm_uuid": "b30b5faa-3070-5965-91f3-7d8dbacf19e9"
2026-02-28 00:42:46.454891 | orchestrator |             }
2026-02-28 00:42:46.454898 | orchestrator |         },
2026-02-28 00:42:46.454905 | orchestrator |         "lvm_volumes": [
2026-02-28 00:42:46.454912 | orchestrator |             {
2026-02-28 00:42:46.454919 | orchestrator |                 "data": "osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426",
2026-02-28 00:42:46.454927 | orchestrator |                 "data_vg": "ceph-7b073c23-7edc-573a-a84d-7267a4d3e426"
2026-02-28 00:42:46.454934 | orchestrator |             },
2026-02-28 00:42:46.454941 | orchestrator |             {
2026-02-28 00:42:46.454948 | orchestrator |                 "data": "osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9",
2026-02-28 00:42:46.454955 | orchestrator |                 "data_vg": "ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9"
2026-02-28 00:42:46.454962 | orchestrator |             }
2026-02-28 00:42:46.454969 | orchestrator |         ]
2026-02-28 00:42:46.454976 | orchestrator |     }
2026-02-28 00:42:46.454983 | orchestrator | }
2026-02-28 00:42:46.454990 | orchestrator |
2026-02-28 00:42:46.454997 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-28 00:42:46.455004 | orchestrator | Saturday 28 February 2026 00:42:44 +0000 (0:00:00.254) 0:00:28.291 *****
2026-02-28 00:42:46.455011 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-28 00:42:46.455017 | orchestrator |
2026-02-28 00:42:46.455024 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-28 00:42:46.455042 | orchestrator |
2026-02-28 00:42:46.455050 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:42:46.455056 | orchestrator | Saturday 28 February 2026 00:42:45 +0000 (0:00:01.193) 0:00:29.485 *****
2026-02-28 00:42:46.455063 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-28 00:42:46.455069 | orchestrator |
2026-02-28 00:42:46.455076 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:42:46.455089 | orchestrator | Saturday 28 February 2026 00:42:45 +0000 (0:00:00.543) 0:00:30.028 *****
2026-02-28 00:42:46.455096 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:42:46.455103 | orchestrator |
2026-02-28 00:42:46.455110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:46.455117 | orchestrator | Saturday 28 February 2026 00:42:46 +0000 (0:00:00.200) 0:00:30.229 *****
2026-02-28 00:42:46.455124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-28 00:42:46.455131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-28 00:42:46.455137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-28 00:42:46.455144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:42:46.455151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:42:46.455165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:42:53.422539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:42:53.422624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:42:53.422638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-28 00:42:53.422648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:42:53.422658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:42:53.422668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:42:53.422677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:42:53.422687 | orchestrator |
2026-02-28 00:42:53.422698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:53.422708 | orchestrator | Saturday 28 February 2026 00:42:46 +0000 (0:00:00.332) 0:00:30.561 *****
2026-02-28 00:42:53.422718 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:42:53.422728 | orchestrator |
2026-02-28 00:42:53.422738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:42:53.422747 | orchestrator | Saturday 28 February 2026 00:42:46 +0000
(0:00:00.186) 0:00:30.748 ***** 2026-02-28 00:42:53.422757 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.422766 | orchestrator | 2026-02-28 00:42:53.422776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.422786 | orchestrator | Saturday 28 February 2026 00:42:46 +0000 (0:00:00.151) 0:00:30.899 ***** 2026-02-28 00:42:53.422795 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.422805 | orchestrator | 2026-02-28 00:42:53.422814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.422824 | orchestrator | Saturday 28 February 2026 00:42:46 +0000 (0:00:00.151) 0:00:31.051 ***** 2026-02-28 00:42:53.422834 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.422843 | orchestrator | 2026-02-28 00:42:53.422852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.422862 | orchestrator | Saturday 28 February 2026 00:42:47 +0000 (0:00:00.143) 0:00:31.194 ***** 2026-02-28 00:42:53.422872 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.422881 | orchestrator | 2026-02-28 00:42:53.422891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.422900 | orchestrator | Saturday 28 February 2026 00:42:47 +0000 (0:00:00.156) 0:00:31.350 ***** 2026-02-28 00:42:53.422910 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.422919 | orchestrator | 2026-02-28 00:42:53.422944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.422954 | orchestrator | Saturday 28 February 2026 00:42:47 +0000 (0:00:00.146) 0:00:31.496 ***** 2026-02-28 00:42:53.422981 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.422992 | orchestrator | 2026-02-28 00:42:53.423001 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-02-28 00:42:53.423011 | orchestrator | Saturday 28 February 2026 00:42:47 +0000 (0:00:00.143) 0:00:31.640 ***** 2026-02-28 00:42:53.423020 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423080 | orchestrator | 2026-02-28 00:42:53.423092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.423103 | orchestrator | Saturday 28 February 2026 00:42:47 +0000 (0:00:00.141) 0:00:31.782 ***** 2026-02-28 00:42:53.423115 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd) 2026-02-28 00:42:53.423127 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd) 2026-02-28 00:42:53.423137 | orchestrator | 2026-02-28 00:42:53.423149 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.423161 | orchestrator | Saturday 28 February 2026 00:42:48 +0000 (0:00:00.651) 0:00:32.433 ***** 2026-02-28 00:42:53.423172 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0) 2026-02-28 00:42:53.423182 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0) 2026-02-28 00:42:53.423194 | orchestrator | 2026-02-28 00:42:53.423205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.423216 | orchestrator | Saturday 28 February 2026 00:42:48 +0000 (0:00:00.406) 0:00:32.840 ***** 2026-02-28 00:42:53.423227 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b) 2026-02-28 00:42:53.423239 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b) 2026-02-28 00:42:53.423250 | orchestrator | 2026-02-28 00:42:53.423260 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.423270 | orchestrator | Saturday 28 February 2026 00:42:49 +0000 (0:00:00.388) 0:00:33.229 ***** 2026-02-28 00:42:53.423280 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57) 2026-02-28 00:42:53.423290 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57) 2026-02-28 00:42:53.423300 | orchestrator | 2026-02-28 00:42:53.423309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:53.423319 | orchestrator | Saturday 28 February 2026 00:42:49 +0000 (0:00:00.434) 0:00:33.663 ***** 2026-02-28 00:42:53.423328 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:42:53.423338 | orchestrator | 2026-02-28 00:42:53.423348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423373 | orchestrator | Saturday 28 February 2026 00:42:49 +0000 (0:00:00.336) 0:00:34.000 ***** 2026-02-28 00:42:53.423384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:42:53.423393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:42:53.423403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:42:53.423413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-28 00:42:53.423422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-28 00:42:53.423432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-28 00:42:53.423447 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-28 00:42:53.423464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-28 00:42:53.423482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-28 00:42:53.423492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-28 00:42:53.423502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-28 00:42:53.423511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-28 00:42:53.423521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-28 00:42:53.423531 | orchestrator | 2026-02-28 00:42:53.423540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423550 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.341) 0:00:34.342 ***** 2026-02-28 00:42:53.423560 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423569 | orchestrator | 2026-02-28 00:42:53.423579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423588 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.182) 0:00:34.525 ***** 2026-02-28 00:42:53.423598 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423608 | orchestrator | 2026-02-28 00:42:53.423617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423627 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.187) 0:00:34.712 ***** 2026-02-28 00:42:53.423637 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423647 | orchestrator | 2026-02-28 00:42:53.423656 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423666 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.176) 0:00:34.889 ***** 2026-02-28 00:42:53.423676 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423685 | orchestrator | 2026-02-28 00:42:53.423695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423704 | orchestrator | Saturday 28 February 2026 00:42:50 +0000 (0:00:00.176) 0:00:35.066 ***** 2026-02-28 00:42:53.423714 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423724 | orchestrator | 2026-02-28 00:42:53.423733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423743 | orchestrator | Saturday 28 February 2026 00:42:51 +0000 (0:00:00.193) 0:00:35.259 ***** 2026-02-28 00:42:53.423753 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423762 | orchestrator | 2026-02-28 00:42:53.423772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423781 | orchestrator | Saturday 28 February 2026 00:42:51 +0000 (0:00:00.517) 0:00:35.777 ***** 2026-02-28 00:42:53.423791 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423801 | orchestrator | 2026-02-28 00:42:53.423810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423820 | orchestrator | Saturday 28 February 2026 00:42:51 +0000 (0:00:00.210) 0:00:35.987 ***** 2026-02-28 00:42:53.423829 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423839 | orchestrator | 2026-02-28 00:42:53.423849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423858 | orchestrator | Saturday 28 February 2026 00:42:52 +0000 (0:00:00.224) 0:00:36.212 ***** 
2026-02-28 00:42:53.423868 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-28 00:42:53.423878 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-28 00:42:53.423887 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-28 00:42:53.423897 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-28 00:42:53.423906 | orchestrator | 2026-02-28 00:42:53.423916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423926 | orchestrator | Saturday 28 February 2026 00:42:52 +0000 (0:00:00.582) 0:00:36.795 ***** 2026-02-28 00:42:53.423936 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423945 | orchestrator | 2026-02-28 00:42:53.423960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.423976 | orchestrator | Saturday 28 February 2026 00:42:52 +0000 (0:00:00.184) 0:00:36.979 ***** 2026-02-28 00:42:53.423986 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.423996 | orchestrator | 2026-02-28 00:42:53.424005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.424015 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.197) 0:00:37.176 ***** 2026-02-28 00:42:53.424025 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.424050 | orchestrator | 2026-02-28 00:42:53.424060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:53.424070 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.175) 0:00:37.352 ***** 2026-02-28 00:42:53.424079 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:53.424089 | orchestrator | 2026-02-28 00:42:53.424105 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-28 00:42:57.891687 | orchestrator | Saturday 28 February 2026 00:42:53 
+0000 (0:00:00.180) 0:00:37.532 ***** 2026-02-28 00:42:57.891796 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-28 00:42:57.891812 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-28 00:42:57.891824 | orchestrator | 2026-02-28 00:42:57.891836 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-28 00:42:57.891847 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.168) 0:00:37.700 ***** 2026-02-28 00:42:57.891858 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.891870 | orchestrator | 2026-02-28 00:42:57.891881 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-28 00:42:57.891892 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.132) 0:00:37.833 ***** 2026-02-28 00:42:57.891903 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.891914 | orchestrator | 2026-02-28 00:42:57.891925 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-28 00:42:57.891936 | orchestrator | Saturday 28 February 2026 00:42:53 +0000 (0:00:00.163) 0:00:37.996 ***** 2026-02-28 00:42:57.891946 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.891957 | orchestrator | 2026-02-28 00:42:57.891968 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-28 00:42:57.891979 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.374) 0:00:38.371 ***** 2026-02-28 00:42:57.891989 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:57.892001 | orchestrator | 2026-02-28 00:42:57.892012 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-28 00:42:57.892024 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.153) 0:00:38.524 ***** 2026-02-28 00:42:57.892122 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f012bc14-1358-5d7b-888e-596399f0a0b7'}}) 2026-02-28 00:42:57.892137 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'de70aebc-f344-5246-8655-326adc55aaa0'}}) 2026-02-28 00:42:57.892148 | orchestrator | 2026-02-28 00:42:57.892159 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-28 00:42:57.892170 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.173) 0:00:38.698 ***** 2026-02-28 00:42:57.892182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f012bc14-1358-5d7b-888e-596399f0a0b7'}})  2026-02-28 00:42:57.892212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'de70aebc-f344-5246-8655-326adc55aaa0'}})  2026-02-28 00:42:57.892226 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892239 | orchestrator | 2026-02-28 00:42:57.892251 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-28 00:42:57.892263 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.158) 0:00:38.856 ***** 2026-02-28 00:42:57.892275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f012bc14-1358-5d7b-888e-596399f0a0b7'}})  2026-02-28 00:42:57.892308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'de70aebc-f344-5246-8655-326adc55aaa0'}})  2026-02-28 00:42:57.892321 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892333 | orchestrator | 2026-02-28 00:42:57.892347 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-28 00:42:57.892359 | orchestrator | Saturday 28 February 2026 00:42:54 +0000 (0:00:00.160) 0:00:39.017 ***** 2026-02-28 00:42:57.892371 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f012bc14-1358-5d7b-888e-596399f0a0b7'}})  2026-02-28 00:42:57.892383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'de70aebc-f344-5246-8655-326adc55aaa0'}})  2026-02-28 00:42:57.892396 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892408 | orchestrator | 2026-02-28 00:42:57.892420 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-28 00:42:57.892432 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.156) 0:00:39.173 ***** 2026-02-28 00:42:57.892444 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:57.892457 | orchestrator | 2026-02-28 00:42:57.892469 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-28 00:42:57.892481 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.147) 0:00:39.321 ***** 2026-02-28 00:42:57.892493 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:57.892506 | orchestrator | 2026-02-28 00:42:57.892517 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-28 00:42:57.892529 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.152) 0:00:39.473 ***** 2026-02-28 00:42:57.892542 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892553 | orchestrator | 2026-02-28 00:42:57.892566 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-28 00:42:57.892577 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.142) 0:00:39.616 ***** 2026-02-28 00:42:57.892588 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892598 | orchestrator | 2026-02-28 00:42:57.892609 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-28 00:42:57.892620 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 
(0:00:00.159) 0:00:39.775 ***** 2026-02-28 00:42:57.892631 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892641 | orchestrator | 2026-02-28 00:42:57.892652 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-28 00:42:57.892663 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.143) 0:00:39.918 ***** 2026-02-28 00:42:57.892673 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:42:57.892684 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:42:57.892695 | orchestrator |  "sdb": { 2026-02-28 00:42:57.892724 | orchestrator |  "osd_lvm_uuid": "f012bc14-1358-5d7b-888e-596399f0a0b7" 2026-02-28 00:42:57.892736 | orchestrator |  }, 2026-02-28 00:42:57.892747 | orchestrator |  "sdc": { 2026-02-28 00:42:57.892758 | orchestrator |  "osd_lvm_uuid": "de70aebc-f344-5246-8655-326adc55aaa0" 2026-02-28 00:42:57.892769 | orchestrator |  } 2026-02-28 00:42:57.892780 | orchestrator |  } 2026-02-28 00:42:57.892791 | orchestrator | } 2026-02-28 00:42:57.892801 | orchestrator | 2026-02-28 00:42:57.892812 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-28 00:42:57.892823 | orchestrator | Saturday 28 February 2026 00:42:55 +0000 (0:00:00.152) 0:00:40.071 ***** 2026-02-28 00:42:57.892834 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892845 | orchestrator | 2026-02-28 00:42:57.892856 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-28 00:42:57.892867 | orchestrator | Saturday 28 February 2026 00:42:56 +0000 (0:00:00.386) 0:00:40.457 ***** 2026-02-28 00:42:57.892878 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892896 | orchestrator | 2026-02-28 00:42:57.892907 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-28 00:42:57.892918 | orchestrator | Saturday 28 February 2026 00:42:56 +0000 
(0:00:00.138) 0:00:40.596 ***** 2026-02-28 00:42:57.892928 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:57.892939 | orchestrator | 2026-02-28 00:42:57.892950 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-28 00:42:57.892960 | orchestrator | Saturday 28 February 2026 00:42:56 +0000 (0:00:00.139) 0:00:40.735 ***** 2026-02-28 00:42:57.892984 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 00:42:57.892995 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-28 00:42:57.893006 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:42:57.893017 | orchestrator |  "sdb": { 2026-02-28 00:42:57.893028 | orchestrator |  "osd_lvm_uuid": "f012bc14-1358-5d7b-888e-596399f0a0b7" 2026-02-28 00:42:57.893063 | orchestrator |  }, 2026-02-28 00:42:57.893074 | orchestrator |  "sdc": { 2026-02-28 00:42:57.893085 | orchestrator |  "osd_lvm_uuid": "de70aebc-f344-5246-8655-326adc55aaa0" 2026-02-28 00:42:57.893096 | orchestrator |  } 2026-02-28 00:42:57.893106 | orchestrator |  }, 2026-02-28 00:42:57.893117 | orchestrator |  "lvm_volumes": [ 2026-02-28 00:42:57.893128 | orchestrator |  { 2026-02-28 00:42:57.893139 | orchestrator |  "data": "osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7", 2026-02-28 00:42:57.893150 | orchestrator |  "data_vg": "ceph-f012bc14-1358-5d7b-888e-596399f0a0b7" 2026-02-28 00:42:57.893161 | orchestrator |  }, 2026-02-28 00:42:57.893172 | orchestrator |  { 2026-02-28 00:42:57.893182 | orchestrator |  "data": "osd-block-de70aebc-f344-5246-8655-326adc55aaa0", 2026-02-28 00:42:57.893202 | orchestrator |  "data_vg": "ceph-de70aebc-f344-5246-8655-326adc55aaa0" 2026-02-28 00:42:57.893213 | orchestrator |  } 2026-02-28 00:42:57.893225 | orchestrator |  ] 2026-02-28 00:42:57.893274 | orchestrator |  } 2026-02-28 00:42:57.893287 | orchestrator | } 2026-02-28 00:42:57.893298 | orchestrator | 2026-02-28 00:42:57.893308 | orchestrator | RUNNING HANDLER [Write configuration file] 
************************************* 2026-02-28 00:42:57.893319 | orchestrator | Saturday 28 February 2026 00:42:56 +0000 (0:00:00.215) 0:00:40.951 ***** 2026-02-28 00:42:57.893330 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:57.893341 | orchestrator | 2026-02-28 00:42:57.893352 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:42:57.893364 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:42:57.893376 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:42:57.893387 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:42:57.893397 | orchestrator | 2026-02-28 00:42:57.893408 | orchestrator | 2026-02-28 00:42:57.893419 | orchestrator | 2026-02-28 00:42:57.893430 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:42:57.893440 | orchestrator | Saturday 28 February 2026 00:42:57 +0000 (0:00:01.034) 0:00:41.986 ***** 2026-02-28 00:42:57.893451 | orchestrator | =============================================================================== 2026-02-28 00:42:57.893464 | orchestrator | Write configuration file ------------------------------------------------ 3.76s 2026-02-28 00:42:57.893482 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2026-02-28 00:42:57.893501 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s 2026-02-28 00:42:57.893517 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2026-02-28 00:42:57.893548 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.03s 2026-02-28 00:42:57.893567 | orchestrator | Add 
known partitions to the list of available block devices ------------- 0.95s 2026-02-28 00:42:57.893585 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-02-28 00:42:57.893604 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2026-02-28 00:42:57.893622 | orchestrator | Print configuration data ------------------------------------------------ 0.78s 2026-02-28 00:42:57.893639 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-02-28 00:42:57.893656 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-02-28 00:42:57.893672 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2026-02-28 00:42:57.893689 | orchestrator | Set DB devices config data ---------------------------------------------- 0.67s 2026-02-28 00:42:57.893718 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.66s 2026-02-28 00:42:58.298198 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-28 00:42:58.298323 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-28 00:42:58.298343 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-28 00:42:58.298355 | orchestrator | Print WAL devices ------------------------------------------------------- 0.64s 2026-02-28 00:42:58.298366 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s 2026-02-28 00:42:58.298377 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-02-28 00:43:20.823517 | orchestrator | 2026-02-28 00:43:20 | INFO  | Task 037a1dcc-c6a1-49dd-82c2-2bd828025d6a (sync inventory) is running in background. Output coming soon. 
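The two "Print configuration data" blocks in this play show the same mechanical mapping on every node: each entry in `ceph_osd_devices` contributes one `lvm_volumes` element whose `data` LV name (`osd-block-<uuid>`) and `data_vg` VG name (`ceph-<uuid>`) embed that device's `osd_lvm_uuid`. A minimal sketch of that transformation in plain Python (an illustration inferred from the printed values, not the collection's actual Jinja2 templating):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Mirror the ceph_osd_devices -> lvm_volumes mapping shown in the log:
    one data LV and one data VG per OSD device, both named after its UUID."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Values for testbed-node-5, as printed in the log above.
node5 = {
    "sdb": {"osd_lvm_uuid": "f012bc14-1358-5d7b-888e-596399f0a0b7"},
    "sdc": {"osd_lvm_uuid": "de70aebc-f344-5246-8655-326adc55aaa0"},
}
print(build_lvm_volumes(node5))
```

Running this against the node-4 values from the first block (`7b073c23-…` and `b30b5faa-…`) reproduces its `lvm_volumes` list the same way.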
2026-02-28 00:43:49.043468 | orchestrator | 2026-02-28 00:43:22 | INFO  | Starting group_vars file reorganization
2026-02-28 00:43:49.043609 | orchestrator | 2026-02-28 00:43:22 | INFO  | Moved 0 file(s) to their respective directories
2026-02-28 00:43:49.043624 | orchestrator | 2026-02-28 00:43:22 | INFO  | Group_vars file reorganization completed
2026-02-28 00:43:49.043636 | orchestrator | 2026-02-28 00:43:25 | INFO  | Starting variable preparation from inventory
2026-02-28 00:43:49.043646 | orchestrator | 2026-02-28 00:43:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-28 00:43:49.043657 | orchestrator | 2026-02-28 00:43:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-28 00:43:49.043667 | orchestrator | 2026-02-28 00:43:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-28 00:43:49.043677 | orchestrator | 2026-02-28 00:43:28 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-28 00:43:49.043687 | orchestrator | 2026-02-28 00:43:28 | INFO  | Variable preparation completed
2026-02-28 00:43:49.043697 | orchestrator | 2026-02-28 00:43:30 | INFO  | Starting inventory overwrite handling
2026-02-28 00:43:49.043708 | orchestrator | 2026-02-28 00:43:30 | INFO  | Handling group overwrites in 99-overwrite
2026-02-28 00:43:49.043718 | orchestrator | 2026-02-28 00:43:30 | INFO  | Removing group frr:children from 60-generic
2026-02-28 00:43:49.043729 | orchestrator | 2026-02-28 00:43:30 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-28 00:43:49.043739 | orchestrator | 2026-02-28 00:43:30 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-28 00:43:49.043749 | orchestrator | 2026-02-28 00:43:30 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-28 00:43:49.043759 | orchestrator | 2026-02-28 00:43:30 | INFO  | Handling group overwrites in 20-roles
2026-02-28 00:43:49.043770 | orchestrator | 2026-02-28 00:43:30 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-28 00:43:49.043812 | orchestrator | 2026-02-28 00:43:30 | INFO  | Removed 5 group(s) in total
2026-02-28 00:43:49.043822 | orchestrator | 2026-02-28 00:43:30 | INFO  | Inventory overwrite handling completed
2026-02-28 00:43:49.043832 | orchestrator | 2026-02-28 00:43:31 | INFO  | Starting merge of inventory files
2026-02-28 00:43:49.043842 | orchestrator | 2026-02-28 00:43:31 | INFO  | Inventory files merged successfully
2026-02-28 00:43:49.043852 | orchestrator | 2026-02-28 00:43:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-28 00:43:49.043862 | orchestrator | 2026-02-28 00:43:47 | INFO  | Successfully wrote ClusterShell configuration
2026-02-28 00:43:49.043873 | orchestrator | [master db8bb59] 2026-02-28-00-43
2026-02-28 00:43:49.043884 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-28 00:43:51.595299 | orchestrator | 2026-02-28 00:43:51 | INFO  | Task d991bef8-b417-4bb7-bac3-bdc9a637bd41 (ceph-create-lvm-devices) was prepared for execution.
2026-02-28 00:43:51.595428 | orchestrator | 2026-02-28 00:43:51 | INFO  | It takes a moment until task d991bef8-b417-4bb7-bac3-bdc9a637bd41 (ceph-create-lvm-devices) has been started and output is visible here.
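The `ceph-create-lvm-devices` task queued here consumes the `lvm_volumes` entries written by the previous play. Judging from the names in that output, each OSD data device ends up with one volume group (`ceph-<uuid>`) containing one logical volume (`osd-block-<uuid>`). A dry-run sketch of the implied LVM commands (the `vgcreate`/`lvcreate` pairing, `/dev` paths, and `100%FREE` sizing are assumptions based on those names, not taken from the playbook source):

```python
# Dry run: print the LVM commands implied by each ceph_osd_devices entry.
# Assumption: one VG ("ceph-<uuid>") holding one LV ("osd-block-<uuid>") per
# data device; device paths and sizing are illustrative only.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "f012bc14-1358-5d7b-888e-596399f0a0b7"},
    "sdc": {"osd_lvm_uuid": "de70aebc-f344-5246-8655-326adc55aaa0"},
}

commands = []
for dev, cfg in ceph_osd_devices.items():
    uuid = cfg["osd_lvm_uuid"]
    commands.append(f"vgcreate ceph-{uuid} /dev/{dev}")
    commands.append(f"lvcreate -n osd-block-{uuid} -l 100%FREE ceph-{uuid}")

for cmd in commands:
    print(cmd)
```

The resulting VG/LV names match the `data`/`data_vg` values in the written configuration, which is what lets later runs recognize the devices as already prepared.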
2026-02-28 00:44:05.556279 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 00:44:05.556372 | orchestrator | 2.16.14
2026-02-28 00:44:05.556389 | orchestrator |
2026-02-28 00:44:05.556402 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-28 00:44:05.556414 | orchestrator |
2026-02-28 00:44:05.556425 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:44:05.556437 | orchestrator | Saturday 28 February 2026 00:43:57 +0000 (0:00:00.422) 0:00:00.422 *****
2026-02-28 00:44:05.556448 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-28 00:44:05.556459 | orchestrator |
2026-02-28 00:44:05.556469 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:44:05.556480 | orchestrator | Saturday 28 February 2026 00:43:58 +0000 (0:00:00.281) 0:00:00.703 *****
2026-02-28 00:44:05.556491 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:05.556502 | orchestrator |
2026-02-28 00:44:05.556513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.556525 | orchestrator | Saturday 28 February 2026 00:43:58 +0000 (0:00:00.250) 0:00:00.954 *****
2026-02-28 00:44:05.556537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:44:05.556548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:44:05.556559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:44:05.556570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:44:05.556580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:44:05.556616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:44:05.556635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:44:05.556655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:44:05.556674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-28 00:44:05.556700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:44:05.556712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:44:05.556722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:44:05.556733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:44:05.556799 | orchestrator |
2026-02-28 00:44:05.556817 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.556836 | orchestrator | Saturday 28 February 2026 00:43:58 +0000 (0:00:00.558) 0:00:01.512 *****
2026-02-28 00:44:05.556856 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.556874 | orchestrator |
2026-02-28 00:44:05.556893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.556912 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.240) 0:00:01.752 *****
2026-02-28 00:44:05.556930 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.556949 | orchestrator |
2026-02-28 00:44:05.556969 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.556995 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.240) 0:00:01.993 *****
2026-02-28 00:44:05.557037 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.557058 | orchestrator |
2026-02-28 00:44:05.557077 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.557089 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.208) 0:00:02.201 *****
2026-02-28 00:44:05.557100 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.557111 | orchestrator |
2026-02-28 00:44:05.557122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.557132 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.204) 0:00:02.406 *****
2026-02-28 00:44:05.557143 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.557154 | orchestrator |
2026-02-28 00:44:05.557165 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.557176 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.226) 0:00:02.632 *****
2026-02-28 00:44:05.557186 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.557197 | orchestrator |
2026-02-28 00:44:05.557208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.557219 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.275) 0:00:02.908 *****
2026-02-28 00:44:05.557229 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.557240 | orchestrator |
2026-02-28 00:44:05.557251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:05.557262 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.241) 0:00:03.149 *****
2026-02-28 00:44:05.557272 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:05.557283 | orchestrator |
2026-02-28 00:44:05.557294 | orchestrator | TASK [Add known links to the list of available block devices]
****************** 2026-02-28 00:44:05.557305 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.247) 0:00:03.397 ***** 2026-02-28 00:44:05.557315 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5) 2026-02-28 00:44:05.557327 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5) 2026-02-28 00:44:05.557338 | orchestrator | 2026-02-28 00:44:05.557349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:05.557389 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.427) 0:00:03.825 ***** 2026-02-28 00:44:05.557409 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81) 2026-02-28 00:44:05.557425 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81) 2026-02-28 00:44:05.557436 | orchestrator | 2026-02-28 00:44:05.557447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:05.557458 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.719) 0:00:04.544 ***** 2026-02-28 00:44:05.557468 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf) 2026-02-28 00:44:05.557479 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf) 2026-02-28 00:44:05.557500 | orchestrator | 2026-02-28 00:44:05.557511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:05.557522 | orchestrator | Saturday 28 February 2026 00:44:02 +0000 (0:00:00.826) 0:00:05.370 ***** 2026-02-28 00:44:05.557533 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de) 2026-02-28 00:44:05.557544 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de) 2026-02-28 00:44:05.557554 | orchestrator | 2026-02-28 00:44:05.557565 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:05.557576 | orchestrator | Saturday 28 February 2026 00:44:03 +0000 (0:00:00.734) 0:00:06.105 ***** 2026-02-28 00:44:05.557586 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:44:05.557597 | orchestrator | 2026-02-28 00:44:05.557608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.557619 | orchestrator | Saturday 28 February 2026 00:44:03 +0000 (0:00:00.328) 0:00:06.433 ***** 2026-02-28 00:44:05.557630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-28 00:44:05.557655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-28 00:44:05.557667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-28 00:44:05.557688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-28 00:44:05.557699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-28 00:44:05.557710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-28 00:44:05.557721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-28 00:44:05.557731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-28 00:44:05.557742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-28 00:44:05.557752 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-28 00:44:05.557763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-28 00:44:05.557774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-28 00:44:05.557785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-28 00:44:05.557795 | orchestrator | 2026-02-28 00:44:05.557806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.557817 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.384) 0:00:06.818 ***** 2026-02-28 00:44:05.557828 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.557839 | orchestrator | 2026-02-28 00:44:05.557849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.557860 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.164) 0:00:06.982 ***** 2026-02-28 00:44:05.557871 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.557882 | orchestrator | 2026-02-28 00:44:05.557892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.557903 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.225) 0:00:07.207 ***** 2026-02-28 00:44:05.557914 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.557925 | orchestrator | 2026-02-28 00:44:05.557935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.557946 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.219) 0:00:07.426 ***** 2026-02-28 00:44:05.557957 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.557973 | orchestrator | 2026-02-28 00:44:05.557984 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-28 00:44:05.557995 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.217) 0:00:07.644 ***** 2026-02-28 00:44:05.558006 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.558144 | orchestrator | 2026-02-28 00:44:05.558158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.558169 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.203) 0:00:07.848 ***** 2026-02-28 00:44:05.558180 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.558191 | orchestrator | 2026-02-28 00:44:05.558202 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:05.558213 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.181) 0:00:08.029 ***** 2026-02-28 00:44:05.558224 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:05.558235 | orchestrator | 2026-02-28 00:44:05.558254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:13.769185 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.186) 0:00:08.216 ***** 2026-02-28 00:44:13.769274 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769289 | orchestrator | 2026-02-28 00:44:13.769302 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:13.769313 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.197) 0:00:08.413 ***** 2026-02-28 00:44:13.769324 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-28 00:44:13.769335 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-28 00:44:13.769346 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-28 00:44:13.769357 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-28 00:44:13.769368 | orchestrator | 2026-02-28 
00:44:13.769379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:13.769390 | orchestrator | Saturday 28 February 2026 00:44:06 +0000 (0:00:00.925) 0:00:09.339 ***** 2026-02-28 00:44:13.769400 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769411 | orchestrator | 2026-02-28 00:44:13.769422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:13.769433 | orchestrator | Saturday 28 February 2026 00:44:06 +0000 (0:00:00.191) 0:00:09.530 ***** 2026-02-28 00:44:13.769444 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769454 | orchestrator | 2026-02-28 00:44:13.769465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:13.769476 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.243) 0:00:09.774 ***** 2026-02-28 00:44:13.769488 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769499 | orchestrator | 2026-02-28 00:44:13.769509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:13.769520 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.200) 0:00:09.974 ***** 2026-02-28 00:44:13.769531 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769542 | orchestrator | 2026-02-28 00:44:13.769553 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-28 00:44:13.769564 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.231) 0:00:10.206 ***** 2026-02-28 00:44:13.769574 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769585 | orchestrator | 2026-02-28 00:44:13.769596 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-28 00:44:13.769607 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.143) 
0:00:10.349 ***** 2026-02-28 00:44:13.769635 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '867868d0-bc68-54b2-8c81-3bd5cfa2d741'}}) 2026-02-28 00:44:13.769647 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee950762-4564-5222-9e83-52313bf46222'}}) 2026-02-28 00:44:13.769658 | orchestrator | 2026-02-28 00:44:13.769670 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-28 00:44:13.769700 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.207) 0:00:10.557 ***** 2026-02-28 00:44:13.769725 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'}) 2026-02-28 00:44:13.769747 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'}) 2026-02-28 00:44:13.769764 | orchestrator | 2026-02-28 00:44:13.769777 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-28 00:44:13.769795 | orchestrator | Saturday 28 February 2026 00:44:09 +0000 (0:00:02.001) 0:00:12.558 ***** 2026-02-28 00:44:13.769808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.769822 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.769835 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.769847 | orchestrator | 2026-02-28 00:44:13.769859 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-28 00:44:13.769878 | orchestrator | Saturday 28 February 2026 
00:44:10 +0000 (0:00:00.152) 0:00:12.711 ***** 2026-02-28 00:44:13.769894 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'}) 2026-02-28 00:44:13.769913 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'}) 2026-02-28 00:44:13.769927 | orchestrator | 2026-02-28 00:44:13.769940 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-28 00:44:13.769954 | orchestrator | Saturday 28 February 2026 00:44:11 +0000 (0:00:01.533) 0:00:14.245 ***** 2026-02-28 00:44:13.769966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.769981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770002 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770082 | orchestrator | 2026-02-28 00:44:13.770096 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-28 00:44:13.770108 | orchestrator | Saturday 28 February 2026 00:44:11 +0000 (0:00:00.170) 0:00:14.415 ***** 2026-02-28 00:44:13.770136 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770148 | orchestrator | 2026-02-28 00:44:13.770159 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-28 00:44:13.770170 | orchestrator | Saturday 28 February 2026 00:44:11 +0000 (0:00:00.163) 0:00:14.579 ***** 2026-02-28 00:44:13.770181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 
'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.770192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770203 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770214 | orchestrator | 2026-02-28 00:44:13.770225 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-28 00:44:13.770236 | orchestrator | Saturday 28 February 2026 00:44:12 +0000 (0:00:00.423) 0:00:15.003 ***** 2026-02-28 00:44:13.770247 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770258 | orchestrator | 2026-02-28 00:44:13.770269 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-28 00:44:13.770280 | orchestrator | Saturday 28 February 2026 00:44:12 +0000 (0:00:00.147) 0:00:15.150 ***** 2026-02-28 00:44:13.770299 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.770311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770322 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770332 | orchestrator | 2026-02-28 00:44:13.770343 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-28 00:44:13.770354 | orchestrator | Saturday 28 February 2026 00:44:12 +0000 (0:00:00.173) 0:00:15.324 ***** 2026-02-28 00:44:13.770365 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770376 | orchestrator | 2026-02-28 00:44:13.770387 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-28 00:44:13.770398 | orchestrator | 
Saturday 28 February 2026 00:44:12 +0000 (0:00:00.149) 0:00:15.473 ***** 2026-02-28 00:44:13.770409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.770420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770431 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770442 | orchestrator | 2026-02-28 00:44:13.770453 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-28 00:44:13.770464 | orchestrator | Saturday 28 February 2026 00:44:12 +0000 (0:00:00.163) 0:00:15.637 ***** 2026-02-28 00:44:13.770475 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:13.770486 | orchestrator | 2026-02-28 00:44:13.770496 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-28 00:44:13.770510 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.159) 0:00:15.796 ***** 2026-02-28 00:44:13.770536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.770563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770585 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770603 | orchestrator | 2026-02-28 00:44:13.770621 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-28 00:44:13.770639 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.180) 0:00:15.977 ***** 2026-02-28 00:44:13.770650 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.770661 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770672 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770683 | orchestrator | 2026-02-28 00:44:13.770694 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-28 00:44:13.770705 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.185) 0:00:16.162 ***** 2026-02-28 00:44:13.770716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:13.770727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:13.770738 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770751 | orchestrator | 2026-02-28 00:44:13.770769 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-28 00:44:13.770781 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.155) 0:00:16.318 ***** 2026-02-28 00:44:13.770801 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:13.770812 | orchestrator | 2026-02-28 00:44:13.770823 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-28 00:44:13.770843 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.113) 0:00:16.432 ***** 2026-02-28 00:44:19.940161 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940226 | orchestrator | 2026-02-28 00:44:19.940236 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-28 00:44:19.940244 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.118) 0:00:16.551 ***** 2026-02-28 00:44:19.940250 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940257 | orchestrator | 2026-02-28 00:44:19.940263 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-28 00:44:19.940269 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.137) 0:00:16.688 ***** 2026-02-28 00:44:19.940275 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:19.940282 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-28 00:44:19.940288 | orchestrator | } 2026-02-28 00:44:19.940294 | orchestrator | 2026-02-28 00:44:19.940300 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-28 00:44:19.940307 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.342) 0:00:17.030 ***** 2026-02-28 00:44:19.940313 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:19.940320 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-28 00:44:19.940326 | orchestrator | } 2026-02-28 00:44:19.940332 | orchestrator | 2026-02-28 00:44:19.940339 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-28 00:44:19.940345 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.148) 0:00:17.179 ***** 2026-02-28 00:44:19.940352 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:19.940359 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-28 00:44:19.940366 | orchestrator | } 2026-02-28 00:44:19.940372 | orchestrator | 2026-02-28 00:44:19.940376 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-28 00:44:19.940380 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.127) 0:00:17.307 ***** 2026-02-28 00:44:19.940383 | orchestrator | ok: 
[testbed-node-3] 2026-02-28 00:44:19.940387 | orchestrator | 2026-02-28 00:44:19.940391 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-28 00:44:19.940395 | orchestrator | Saturday 28 February 2026 00:44:15 +0000 (0:00:00.632) 0:00:17.939 ***** 2026-02-28 00:44:19.940399 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:19.940402 | orchestrator | 2026-02-28 00:44:19.940406 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-28 00:44:19.940410 | orchestrator | Saturday 28 February 2026 00:44:15 +0000 (0:00:00.512) 0:00:18.451 ***** 2026-02-28 00:44:19.940414 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:19.940417 | orchestrator | 2026-02-28 00:44:19.940421 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-28 00:44:19.940425 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.527) 0:00:18.978 ***** 2026-02-28 00:44:19.940429 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:19.940433 | orchestrator | 2026-02-28 00:44:19.940436 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-28 00:44:19.940440 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.134) 0:00:19.112 ***** 2026-02-28 00:44:19.940444 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940448 | orchestrator | 2026-02-28 00:44:19.940452 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-28 00:44:19.940455 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.110) 0:00:19.223 ***** 2026-02-28 00:44:19.940459 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940463 | orchestrator | 2026-02-28 00:44:19.940467 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-28 00:44:19.940481 | orchestrator | 
Saturday 28 February 2026 00:44:16 +0000 (0:00:00.110) 0:00:19.334 ***** 2026-02-28 00:44:19.940485 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:19.940489 | orchestrator |  "vgs_report": { 2026-02-28 00:44:19.940493 | orchestrator |  "vg": [] 2026-02-28 00:44:19.940496 | orchestrator |  } 2026-02-28 00:44:19.940500 | orchestrator | } 2026-02-28 00:44:19.940504 | orchestrator | 2026-02-28 00:44:19.940508 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-28 00:44:19.940512 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.125) 0:00:19.459 ***** 2026-02-28 00:44:19.940515 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940519 | orchestrator | 2026-02-28 00:44:19.940532 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-28 00:44:19.940536 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.129) 0:00:19.588 ***** 2026-02-28 00:44:19.940539 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940543 | orchestrator | 2026-02-28 00:44:19.940547 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-28 00:44:19.940551 | orchestrator | Saturday 28 February 2026 00:44:17 +0000 (0:00:00.134) 0:00:19.723 ***** 2026-02-28 00:44:19.940554 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940558 | orchestrator | 2026-02-28 00:44:19.940562 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-28 00:44:19.940565 | orchestrator | Saturday 28 February 2026 00:44:17 +0000 (0:00:00.274) 0:00:19.997 ***** 2026-02-28 00:44:19.940569 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940573 | orchestrator | 2026-02-28 00:44:19.940577 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-28 00:44:19.940581 | orchestrator | 
Saturday 28 February 2026 00:44:17 +0000 (0:00:00.133) 0:00:20.131 ***** 2026-02-28 00:44:19.940584 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940588 | orchestrator | 2026-02-28 00:44:19.940592 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-28 00:44:19.940596 | orchestrator | Saturday 28 February 2026 00:44:17 +0000 (0:00:00.169) 0:00:20.301 ***** 2026-02-28 00:44:19.940599 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940603 | orchestrator | 2026-02-28 00:44:19.940607 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-28 00:44:19.940610 | orchestrator | Saturday 28 February 2026 00:44:17 +0000 (0:00:00.121) 0:00:20.423 ***** 2026-02-28 00:44:19.940614 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940618 | orchestrator | 2026-02-28 00:44:19.940622 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-28 00:44:19.940625 | orchestrator | Saturday 28 February 2026 00:44:17 +0000 (0:00:00.126) 0:00:20.549 ***** 2026-02-28 00:44:19.940637 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940641 | orchestrator | 2026-02-28 00:44:19.940645 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-28 00:44:19.940649 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.129) 0:00:20.679 ***** 2026-02-28 00:44:19.940652 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940656 | orchestrator | 2026-02-28 00:44:19.940660 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-28 00:44:19.940664 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.110) 0:00:20.790 ***** 2026-02-28 00:44:19.940667 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940671 | orchestrator | 2026-02-28 00:44:19.940675 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-28 00:44:19.940678 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.122) 0:00:20.912 ***** 2026-02-28 00:44:19.940682 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940686 | orchestrator | 2026-02-28 00:44:19.940690 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-28 00:44:19.940693 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.133) 0:00:21.045 ***** 2026-02-28 00:44:19.940700 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940704 | orchestrator | 2026-02-28 00:44:19.940707 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-28 00:44:19.940711 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.132) 0:00:21.177 ***** 2026-02-28 00:44:19.940715 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940718 | orchestrator | 2026-02-28 00:44:19.940722 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-28 00:44:19.940726 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.139) 0:00:21.317 ***** 2026-02-28 00:44:19.940730 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940733 | orchestrator | 2026-02-28 00:44:19.940737 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-28 00:44:19.940741 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:00.125) 0:00:21.442 ***** 2026-02-28 00:44:19.940747 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:19.940754 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 
'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:19.940761 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940767 | orchestrator | 2026-02-28 00:44:19.940774 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-28 00:44:19.940780 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.349) 0:00:21.791 ***** 2026-02-28 00:44:19.940787 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:19.940794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:19.940802 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940806 | orchestrator | 2026-02-28 00:44:19.940810 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-28 00:44:19.940817 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.146) 0:00:21.938 ***** 2026-02-28 00:44:19.940824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:19.940830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:19.940837 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940843 | orchestrator | 2026-02-28 00:44:19.940849 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-28 00:44:19.940855 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.175) 0:00:22.113 ***** 2026-02-28 00:44:19.940863 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:19.940869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:19.940875 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940881 | orchestrator | 2026-02-28 00:44:19.940888 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-28 00:44:19.940894 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.178) 0:00:22.292 ***** 2026-02-28 00:44:19.940901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:19.940907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:19.940918 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:19.940926 | orchestrator | 2026-02-28 00:44:19.940930 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-28 00:44:19.940935 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.146) 0:00:22.439 ***** 2026-02-28 00:44:19.940945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:25.936142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:25.936236 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:25.936250 | orchestrator | 2026-02-28 00:44:25.936262 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-28 00:44:25.936274 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.166) 0:00:22.605 ***** 2026-02-28 00:44:25.936284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:25.936295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:25.936305 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:25.936314 | orchestrator | 2026-02-28 00:44:25.936325 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-28 00:44:25.936334 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 (0:00:00.186) 0:00:22.792 ***** 2026-02-28 00:44:25.936344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:25.936355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:25.936365 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:25.936374 | orchestrator | 2026-02-28 00:44:25.936384 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-28 00:44:25.936394 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 (0:00:00.191) 0:00:22.983 ***** 2026-02-28 00:44:25.936404 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:25.936415 | orchestrator | 2026-02-28 00:44:25.936424 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-28 00:44:25.936434 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 
(0:00:00.531) 0:00:23.515 ***** 2026-02-28 00:44:25.936444 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:25.936453 | orchestrator | 2026-02-28 00:44:25.936463 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-28 00:44:25.936473 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.532) 0:00:24.048 ***** 2026-02-28 00:44:25.936482 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:25.936492 | orchestrator | 2026-02-28 00:44:25.936502 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-28 00:44:25.936512 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.216) 0:00:24.264 ***** 2026-02-28 00:44:25.936522 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'vg_name': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'}) 2026-02-28 00:44:25.936533 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'vg_name': 'ceph-ee950762-4564-5222-9e83-52313bf46222'}) 2026-02-28 00:44:25.936543 | orchestrator | 2026-02-28 00:44:25.936553 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-28 00:44:25.936562 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.192) 0:00:24.457 ***** 2026-02-28 00:44:25.936572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:25.936605 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:25.936616 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:25.936627 | orchestrator | 2026-02-28 00:44:25.936638 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-28 00:44:25.936650 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.515) 0:00:24.972 ***** 2026-02-28 00:44:25.936661 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:25.936673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:25.936684 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:25.936695 | orchestrator | 2026-02-28 00:44:25.936706 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:44:25.936717 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.186) 0:00:25.159 ***** 2026-02-28 00:44:25.936729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})  2026-02-28 00:44:25.936740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})  2026-02-28 00:44:25.936751 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:25.936762 | orchestrator | 2026-02-28 00:44:25.936774 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:44:25.936785 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.183) 0:00:25.342 ***** 2026-02-28 00:44:25.936812 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:25.936824 | orchestrator |  "lvm_report": { 2026-02-28 00:44:25.936835 | orchestrator |  "lv": [ 2026-02-28 00:44:25.936846 | orchestrator |  { 2026-02-28 00:44:25.936857 | orchestrator |  "lv_name": 
"osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741", 2026-02-28 00:44:25.936870 | orchestrator |  "vg_name": "ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741" 2026-02-28 00:44:25.936881 | orchestrator |  }, 2026-02-28 00:44:25.936892 | orchestrator |  { 2026-02-28 00:44:25.936903 | orchestrator |  "lv_name": "osd-block-ee950762-4564-5222-9e83-52313bf46222", 2026-02-28 00:44:25.936913 | orchestrator |  "vg_name": "ceph-ee950762-4564-5222-9e83-52313bf46222" 2026-02-28 00:44:25.936925 | orchestrator |  } 2026-02-28 00:44:25.936936 | orchestrator |  ], 2026-02-28 00:44:25.936947 | orchestrator |  "pv": [ 2026-02-28 00:44:25.936956 | orchestrator |  { 2026-02-28 00:44:25.936966 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:44:25.936976 | orchestrator |  "vg_name": "ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741" 2026-02-28 00:44:25.936986 | orchestrator |  }, 2026-02-28 00:44:25.936995 | orchestrator |  { 2026-02-28 00:44:25.937022 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:44:25.937050 | orchestrator |  "vg_name": "ceph-ee950762-4564-5222-9e83-52313bf46222" 2026-02-28 00:44:25.937060 | orchestrator |  } 2026-02-28 00:44:25.937070 | orchestrator |  ] 2026-02-28 00:44:25.937079 | orchestrator |  } 2026-02-28 00:44:25.937089 | orchestrator | } 2026-02-28 00:44:25.937099 | orchestrator | 2026-02-28 00:44:25.937109 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-28 00:44:25.937119 | orchestrator | 2026-02-28 00:44:25.937128 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:44:25.937138 | orchestrator | Saturday 28 February 2026 00:44:23 +0000 (0:00:00.338) 0:00:25.681 ***** 2026-02-28 00:44:25.937156 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-28 00:44:25.937165 | orchestrator | 2026-02-28 00:44:25.937175 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 
00:44:25.937185 | orchestrator | Saturday 28 February 2026 00:44:23 +0000 (0:00:00.252) 0:00:25.934 ***** 2026-02-28 00:44:25.937194 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:25.937204 | orchestrator | 2026-02-28 00:44:25.937214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:25.937224 | orchestrator | Saturday 28 February 2026 00:44:23 +0000 (0:00:00.287) 0:00:26.222 ***** 2026-02-28 00:44:25.937234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:44:25.937243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:44:25.937253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:44:25.937263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:44:25.937273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:44:25.937282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:44:25.937292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:44:25.937306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:44:25.937317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-28 00:44:25.937326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:44:25.937336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:44:25.937346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:44:25.937355 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:44:25.937365 | orchestrator | 2026-02-28 00:44:25.937374 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:25.937384 | orchestrator | Saturday 28 February 2026 00:44:24 +0000 (0:00:00.448) 0:00:26.670 ***** 2026-02-28 00:44:25.937394 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:25.937403 | orchestrator | 2026-02-28 00:44:25.937413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:25.937423 | orchestrator | Saturday 28 February 2026 00:44:24 +0000 (0:00:00.206) 0:00:26.877 ***** 2026-02-28 00:44:25.937432 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:25.937442 | orchestrator | 2026-02-28 00:44:25.937451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:25.937461 | orchestrator | Saturday 28 February 2026 00:44:24 +0000 (0:00:00.204) 0:00:27.082 ***** 2026-02-28 00:44:25.937471 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:25.937481 | orchestrator | 2026-02-28 00:44:25.937490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:25.937500 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.826) 0:00:27.909 ***** 2026-02-28 00:44:25.937509 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:25.937519 | orchestrator | 2026-02-28 00:44:25.937529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:25.937538 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.232) 0:00:28.141 ***** 2026-02-28 00:44:25.937548 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:25.937558 | orchestrator | 2026-02-28 00:44:25.937567 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-28 00:44:25.937577 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.221) 0:00:28.363 ***** 2026-02-28 00:44:25.937602 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:25.937612 | orchestrator | 2026-02-28 00:44:25.937628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238253 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.233) 0:00:28.597 ***** 2026-02-28 00:44:38.238331 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238342 | orchestrator | 2026-02-28 00:44:38.238350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238357 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.240) 0:00:28.837 ***** 2026-02-28 00:44:38.238364 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238371 | orchestrator | 2026-02-28 00:44:38.238379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238386 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.239) 0:00:29.076 ***** 2026-02-28 00:44:38.238393 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20) 2026-02-28 00:44:38.238401 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20) 2026-02-28 00:44:38.238408 | orchestrator | 2026-02-28 00:44:38.238415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238421 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.424) 0:00:29.500 ***** 2026-02-28 00:44:38.238428 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723) 2026-02-28 00:44:38.238435 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723) 2026-02-28 00:44:38.238441 | orchestrator | 2026-02-28 00:44:38.238448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238455 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.444) 0:00:29.945 ***** 2026-02-28 00:44:38.238461 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4) 2026-02-28 00:44:38.238468 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4) 2026-02-28 00:44:38.238475 | orchestrator | 2026-02-28 00:44:38.238481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238488 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.511) 0:00:30.456 ***** 2026-02-28 00:44:38.238495 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a) 2026-02-28 00:44:38.238501 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a) 2026-02-28 00:44:38.238508 | orchestrator | 2026-02-28 00:44:38.238515 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:38.238522 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.684) 0:00:31.141 ***** 2026-02-28 00:44:38.238528 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:44:38.238535 | orchestrator | 2026-02-28 00:44:38.238542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238549 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.580) 0:00:31.722 ***** 2026-02-28 00:44:38.238570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-28 00:44:38.238578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:44:38.238585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:44:38.238591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:44:38.238598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:44:38.238605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:44:38.238628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:44:38.238635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:44:38.238642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-28 00:44:38.238648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:44:38.238655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:44:38.238661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:44:38.238668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:44:38.238675 | orchestrator | 2026-02-28 00:44:38.238681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238688 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:01.125) 0:00:32.847 ***** 2026-02-28 00:44:38.238695 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238701 | orchestrator | 2026-02-28 
00:44:38.238708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238715 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.218) 0:00:33.066 ***** 2026-02-28 00:44:38.238722 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238728 | orchestrator | 2026-02-28 00:44:38.238735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238742 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.256) 0:00:33.323 ***** 2026-02-28 00:44:38.238749 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238755 | orchestrator | 2026-02-28 00:44:38.238774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238781 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.233) 0:00:33.556 ***** 2026-02-28 00:44:38.238788 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238795 | orchestrator | 2026-02-28 00:44:38.238803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238810 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.213) 0:00:33.770 ***** 2026-02-28 00:44:38.238818 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238825 | orchestrator | 2026-02-28 00:44:38.238832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238840 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.206) 0:00:33.976 ***** 2026-02-28 00:44:38.238847 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238855 | orchestrator | 2026-02-28 00:44:38.238863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238870 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.222) 
0:00:34.198 ***** 2026-02-28 00:44:38.238878 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238885 | orchestrator | 2026-02-28 00:44:38.238893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238901 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.251) 0:00:34.450 ***** 2026-02-28 00:44:38.238908 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.238916 | orchestrator | 2026-02-28 00:44:38.238923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238931 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.204) 0:00:34.654 ***** 2026-02-28 00:44:38.238938 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-28 00:44:38.238946 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-28 00:44:38.238953 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-28 00:44:38.238961 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-28 00:44:38.238968 | orchestrator | 2026-02-28 00:44:38.238976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.238992 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:01.046) 0:00:35.701 ***** 2026-02-28 00:44:38.238999 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.239029 | orchestrator | 2026-02-28 00:44:38.239037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.239044 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:00.197) 0:00:35.899 ***** 2026-02-28 00:44:38.239052 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.239059 | orchestrator | 2026-02-28 00:44:38.239067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.239092 | orchestrator | Saturday 28 
February 2026 00:44:33 +0000 (0:00:00.705) 0:00:36.604 ***** 2026-02-28 00:44:38.239100 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.239107 | orchestrator | 2026-02-28 00:44:38.239115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:38.239123 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.232) 0:00:36.837 ***** 2026-02-28 00:44:38.239130 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.239138 | orchestrator | 2026-02-28 00:44:38.239145 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-28 00:44:38.239154 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.212) 0:00:37.049 ***** 2026-02-28 00:44:38.239162 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.239169 | orchestrator | 2026-02-28 00:44:38.239177 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-28 00:44:38.239185 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.135) 0:00:37.185 ***** 2026-02-28 00:44:38.239193 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b073c23-7edc-573a-a84d-7267a4d3e426'}}) 2026-02-28 00:44:38.239202 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b30b5faa-3070-5965-91f3-7d8dbacf19e9'}}) 2026-02-28 00:44:38.239209 | orchestrator | 2026-02-28 00:44:38.239215 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-28 00:44:38.239222 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.194) 0:00:37.379 ***** 2026-02-28 00:44:38.239230 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'}) 2026-02-28 00:44:38.239238 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'}) 2026-02-28 00:44:38.239245 | orchestrator | 2026-02-28 00:44:38.239252 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-28 00:44:38.239258 | orchestrator | Saturday 28 February 2026 00:44:36 +0000 (0:00:01.910) 0:00:39.290 ***** 2026-02-28 00:44:38.239265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})  2026-02-28 00:44:38.239273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})  2026-02-28 00:44:38.239280 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:38.239286 | orchestrator | 2026-02-28 00:44:38.239293 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-28 00:44:38.239300 | orchestrator | Saturday 28 February 2026 00:44:36 +0000 (0:00:00.184) 0:00:39.475 ***** 2026-02-28 00:44:38.239306 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'}) 2026-02-28 00:44:38.239318 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'}) 2026-02-28 00:44:44.453449 | orchestrator | 2026-02-28 00:44:44.453556 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-28 00:44:44.453589 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:01.421) 0:00:40.896 ***** 2026-02-28 00:44:44.453612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 
'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})  2026-02-28 00:44:44.453622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})  2026-02-28 00:44:44.453629 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453637 | orchestrator | 2026-02-28 00:44:44.453644 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-28 00:44:44.453651 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.172) 0:00:41.069 ***** 2026-02-28 00:44:44.453657 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453665 | orchestrator | 2026-02-28 00:44:44.453672 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-28 00:44:44.453678 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.241) 0:00:41.310 ***** 2026-02-28 00:44:44.453685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})  2026-02-28 00:44:44.453692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})  2026-02-28 00:44:44.453698 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453705 | orchestrator | 2026-02-28 00:44:44.453712 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-28 00:44:44.453718 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.172) 0:00:41.483 ***** 2026-02-28 00:44:44.453725 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453731 | orchestrator | 2026-02-28 00:44:44.453738 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-28 00:44:44.453745 | orchestrator | 
Saturday 28 February 2026 00:44:38 +0000 (0:00:00.164) 0:00:41.648 ***** 2026-02-28 00:44:44.453751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})  2026-02-28 00:44:44.453758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})  2026-02-28 00:44:44.453765 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453771 | orchestrator | 2026-02-28 00:44:44.453778 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-28 00:44:44.453788 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.506) 0:00:42.154 ***** 2026-02-28 00:44:44.453795 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453802 | orchestrator | 2026-02-28 00:44:44.453808 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-28 00:44:44.453815 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.134) 0:00:42.289 ***** 2026-02-28 00:44:44.453822 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})  2026-02-28 00:44:44.453828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})  2026-02-28 00:44:44.453835 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:44.453842 | orchestrator | 2026-02-28 00:44:44.453849 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-28 00:44:44.453855 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.186) 0:00:42.475 ***** 2026-02-28 00:44:44.453862 | orchestrator | ok: [testbed-node-4] 
2026-02-28 00:44:44.453869 | orchestrator |
2026-02-28 00:44:44.453876 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-28 00:44:44.453889 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.166) 0:00:42.641 *****
2026-02-28 00:44:44.453896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:44.453903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:44.453909 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.453916 | orchestrator |
2026-02-28 00:44:44.453923 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-28 00:44:44.453930 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.163) 0:00:42.805 *****
2026-02-28 00:44:44.453936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:44.453943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:44.453950 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.453956 | orchestrator |
2026-02-28 00:44:44.453963 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-28 00:44:44.453985 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.157) 0:00:42.963 *****
2026-02-28 00:44:44.453993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:44.454062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:44.454073 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454080 | orchestrator |
2026-02-28 00:44:44.454088 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-28 00:44:44.454096 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.169) 0:00:43.133 *****
2026-02-28 00:44:44.454103 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454111 | orchestrator |
2026-02-28 00:44:44.454119 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-28 00:44:44.454127 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.202) 0:00:43.335 *****
2026-02-28 00:44:44.454135 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454142 | orchestrator |
2026-02-28 00:44:44.454150 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-28 00:44:44.454157 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.149) 0:00:43.484 *****
2026-02-28 00:44:44.454165 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454172 | orchestrator |
2026-02-28 00:44:44.454180 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-28 00:44:44.454187 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.144) 0:00:43.628 *****
2026-02-28 00:44:44.454195 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:44:44.454203 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-28 00:44:44.454211 | orchestrator | }
2026-02-28 00:44:44.454218 | orchestrator |
2026-02-28 00:44:44.454226 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-28 00:44:44.454233 | orchestrator | Saturday 28 February 2026 00:44:41 +0000 (0:00:00.145) 0:00:43.774 *****
2026-02-28 00:44:44.454241 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:44:44.454249 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-28 00:44:44.454256 | orchestrator | }
2026-02-28 00:44:44.454263 | orchestrator |
2026-02-28 00:44:44.454271 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-28 00:44:44.454279 | orchestrator | Saturday 28 February 2026 00:44:41 +0000 (0:00:00.145) 0:00:43.919 *****
2026-02-28 00:44:44.454292 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:44:44.454300 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-28 00:44:44.454308 | orchestrator | }
2026-02-28 00:44:44.454315 | orchestrator |
2026-02-28 00:44:44.454323 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-28 00:44:44.454331 | orchestrator | Saturday 28 February 2026 00:44:41 +0000 (0:00:00.365) 0:00:44.285 *****
2026-02-28 00:44:44.454338 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:44.454344 | orchestrator |
2026-02-28 00:44:44.454351 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-28 00:44:44.454362 | orchestrator | Saturday 28 February 2026 00:44:42 +0000 (0:00:00.577) 0:00:44.862 *****
2026-02-28 00:44:44.454369 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:44.454376 | orchestrator |
2026-02-28 00:44:44.454383 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-28 00:44:44.454389 | orchestrator | Saturday 28 February 2026 00:44:42 +0000 (0:00:00.531) 0:00:45.394 *****
2026-02-28 00:44:44.454396 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:44.454403 | orchestrator |
2026-02-28 00:44:44.454409 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-28 00:44:44.454416 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.524) 0:00:45.919 *****
2026-02-28 00:44:44.454422 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:44.454429 | orchestrator |
2026-02-28 00:44:44.454436 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-28 00:44:44.454443 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.176) 0:00:46.095 *****
2026-02-28 00:44:44.454449 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454456 | orchestrator |
2026-02-28 00:44:44.454463 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-28 00:44:44.454469 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.118) 0:00:46.213 *****
2026-02-28 00:44:44.454476 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454482 | orchestrator |
2026-02-28 00:44:44.454489 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-28 00:44:44.454496 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.114) 0:00:46.328 *****
2026-02-28 00:44:44.454502 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:44:44.454509 | orchestrator |     "vgs_report": {
2026-02-28 00:44:44.454516 | orchestrator |         "vg": []
2026-02-28 00:44:44.454522 | orchestrator |     }
2026-02-28 00:44:44.454529 | orchestrator | }
2026-02-28 00:44:44.454536 | orchestrator |
2026-02-28 00:44:44.454542 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-28 00:44:44.454549 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.146) 0:00:46.474 *****
2026-02-28 00:44:44.454556 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454562 | orchestrator |
2026-02-28 00:44:44.454569 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-28 00:44:44.454576 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.182) 0:00:46.599 *****
2026-02-28 00:44:44.454582 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454589 | orchestrator |
2026-02-28 00:44:44.454596 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-28 00:44:44.454602 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.168) 0:00:46.781 *****
2026-02-28 00:44:44.454609 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454616 | orchestrator |
2026-02-28 00:44:44.454622 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-28 00:44:44.454629 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.163) 0:00:46.950 *****
2026-02-28 00:44:44.454636 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:44.454643 | orchestrator |
2026-02-28 00:44:44.454655 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-28 00:44:49.356971 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.368) 0:00:47.114 *****
2026-02-28 00:44:49.357095 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357103 | orchestrator |
2026-02-28 00:44:49.357108 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-28 00:44:49.357112 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.124) 0:00:47.482 *****
2026-02-28 00:44:49.357116 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357120 | orchestrator |
2026-02-28 00:44:49.357124 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-28 00:44:49.357128 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.138) 0:00:47.606 *****
2026-02-28 00:44:49.357132 | orchestrator | skipping: [testbed-node-4]
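The "Gather ... VGs with total and available size in bytes" tasks above, together with the later "Fail if size ... > available" checks, suggest a simple pattern: query LVM for per-VG totals in machine-readable form, then compare against the space the planned LVs need. A sketch under that assumption (the function name and the exact `vgs` flags are illustrative; the real playbook may differ):

```python
import json
import subprocess


def gather_vg_sizes(vg_names):
    """Return per-VG total and free bytes using `vgs` JSON output.

    `vgs --reportformat json` is a real LVM2 feature; with no VG names
    there is nothing to query, so an empty report is returned -- which
    matches the empty `vgs_report: {"vg": []}` printed in this run.
    """
    if not vg_names:
        return {"vg": []}
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "--units", "b",
         "-o", "vg_name,vg_size,vg_free", *vg_names],
        check=True, capture_output=True, text=True,
    ).stdout
    # LVM wraps the table in {"report": [{"vg": [...]}]}
    return json.loads(out)["report"][0]
```

With no DB/WAL devices configured (as on testbed-node-4 here), every size check is skipped and the report stays empty.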
2026-02-28 00:44:49.357136 | orchestrator |
2026-02-28 00:44:49.357140 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-28 00:44:49.357144 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.138) 0:00:47.745 *****
2026-02-28 00:44:49.357148 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357151 | orchestrator |
2026-02-28 00:44:49.357155 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-28 00:44:49.357159 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.156) 0:00:47.902 *****
2026-02-28 00:44:49.357163 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357167 | orchestrator |
2026-02-28 00:44:49.357170 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-28 00:44:49.357174 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.158) 0:00:48.060 *****
2026-02-28 00:44:49.357178 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357182 | orchestrator |
2026-02-28 00:44:49.357186 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-28 00:44:49.357190 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.161) 0:00:48.222 *****
2026-02-28 00:44:49.357193 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357197 | orchestrator |
2026-02-28 00:44:49.357201 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-28 00:44:49.357205 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.146) 0:00:48.369 *****
2026-02-28 00:44:49.357209 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357212 | orchestrator |
2026-02-28 00:44:49.357216 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-28 00:44:49.357220 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.138) 0:00:48.508 *****
2026-02-28 00:44:49.357224 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357228 | orchestrator |
2026-02-28 00:44:49.357231 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-28 00:44:49.357235 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.150) 0:00:48.658 *****
2026-02-28 00:44:49.357239 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357243 | orchestrator |
2026-02-28 00:44:49.357247 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-28 00:44:49.357251 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.143) 0:00:48.802 *****
2026-02-28 00:44:49.357257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357266 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357270 | orchestrator |
2026-02-28 00:44:49.357274 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-28 00:44:49.357278 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.163) 0:00:48.966 *****
2026-02-28 00:44:49.357282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357294 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357298 | orchestrator |
2026-02-28 00:44:49.357302 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-28 00:44:49.357306 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.146) 0:00:49.113 *****
2026-02-28 00:44:49.357310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357317 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357321 | orchestrator |
2026-02-28 00:44:49.357325 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-28 00:44:49.357329 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.387) 0:00:49.500 *****
2026-02-28 00:44:49.357332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357340 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357344 | orchestrator |
2026-02-28 00:44:49.357358 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-28 00:44:49.357362 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.152) 0:00:49.653 *****
2026-02-28 00:44:49.357366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357374 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357378 | orchestrator |
2026-02-28 00:44:49.357381 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-28 00:44:49.357385 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.166) 0:00:49.819 *****
2026-02-28 00:44:49.357389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357397 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357401 | orchestrator |
2026-02-28 00:44:49.357405 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-28 00:44:49.357408 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.158) 0:00:49.978 *****
2026-02-28 00:44:49.357444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357452 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357456 | orchestrator |
2026-02-28 00:44:49.357460 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-28 00:44:49.357464 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.169) 0:00:50.148 *****
2026-02-28 00:44:49.357467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357481 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357485 | orchestrator |
2026-02-28 00:44:49.357489 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-28 00:44:49.357493 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.150) 0:00:50.298 *****
2026-02-28 00:44:49.357497 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:49.357501 | orchestrator |
2026-02-28 00:44:49.357504 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-28 00:44:49.357508 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.499) 0:00:50.798 *****
2026-02-28 00:44:49.357512 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:49.357516 | orchestrator |
2026-02-28 00:44:49.357520 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-28 00:44:49.357523 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.526) 0:00:51.324 *****
2026-02-28 00:44:49.357527 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:44:49.357531 | orchestrator |
2026-02-28 00:44:49.357535 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-28 00:44:49.357539 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.127) 0:00:51.452 *****
2026-02-28 00:44:49.357543 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'vg_name': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357549 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'vg_name': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357553 | orchestrator |
2026-02-28 00:44:49.357558 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-28 00:44:49.357562 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.176) 0:00:51.628 *****
2026-02-28 00:44:49.357566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:49.357575 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:49.357579 | orchestrator |
2026-02-28 00:44:49.357583 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-28 00:44:49.357588 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.195) 0:00:51.824 *****
2026-02-28 00:44:49.357592 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:49.357599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:55.735892 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:55.736045 | orchestrator |
2026-02-28 00:44:55.736064 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-28 00:44:55.736077 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.195) 0:00:52.019 *****
2026-02-28 00:44:55.736090 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:44:55.736103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:44:55.736114 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:44:55.736125 | orchestrator |
2026-02-28 00:44:55.736136 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-28 00:44:55.736171 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.168) 0:00:52.187 *****
2026-02-28 00:44:55.736184 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:44:55.736195 | orchestrator |     "lvm_report": {
2026-02-28 00:44:55.736207 | orchestrator |         "lv": [
2026-02-28 00:44:55.736217 | orchestrator |             {
2026-02-28 00:44:55.736228 | orchestrator |                 "lv_name": "osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426",
2026-02-28 00:44:55.736240 | orchestrator |                 "vg_name": "ceph-7b073c23-7edc-573a-a84d-7267a4d3e426"
2026-02-28 00:44:55.736251 | orchestrator |             },
2026-02-28 00:44:55.736262 | orchestrator |             {
2026-02-28 00:44:55.736273 | orchestrator |                 "lv_name": "osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9",
2026-02-28 00:44:55.736283 | orchestrator |                 "vg_name": "ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9"
2026-02-28 00:44:55.736294 | orchestrator |             }
2026-02-28 00:44:55.736305 | orchestrator |         ],
2026-02-28 00:44:55.736316 | orchestrator |         "pv": [
2026-02-28 00:44:55.736326 | orchestrator |             {
2026-02-28 00:44:55.736337 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-28 00:44:55.736348 | orchestrator |                 "vg_name": "ceph-7b073c23-7edc-573a-a84d-7267a4d3e426"
2026-02-28 00:44:55.736359 | orchestrator |             },
2026-02-28 00:44:55.736369 | orchestrator |             {
2026-02-28 00:44:55.736380 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-28 00:44:55.736391 | orchestrator |                 "vg_name": "ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9"
2026-02-28 00:44:55.736402 | orchestrator |             }
2026-02-28 00:44:55.736412 | orchestrator |         ]
2026-02-28 00:44:55.736423 | orchestrator |     }
2026-02-28 00:44:55.736436 | orchestrator | }
2026-02-28 00:44:55.736449 | orchestrator |
2026-02-28 00:44:55.736461 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-28 00:44:55.736473 | orchestrator |
2026-02-28 00:44:55.736486 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:44:55.736498 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.499) 0:00:52.687 *****
2026-02-28 00:44:55.736527 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-28 00:44:55.736539 | orchestrator |
2026-02-28 00:44:55.736550 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:44:55.736561 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.279) 0:00:52.967 *****
2026-02-28 00:44:55.736572 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:55.736583 | orchestrator |
2026-02-28 00:44:55.736594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.736605 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.229) 0:00:53.197 *****
2026-02-28 00:44:55.736616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-28 00:44:55.736627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-28 00:44:55.736638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-28 00:44:55.736649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:44:55.736660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:44:55.736671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:44:55.736681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:44:55.736692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:44:55.736703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-28 00:44:55.736713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:44:55.736733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:44:55.736744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:44:55.736754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:44:55.736765 | orchestrator |
2026-02-28 00:44:55.736776 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.736791 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.430) 0:00:53.627 *****
2026-02-28 00:44:55.736802 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.736813 | orchestrator |
2026-02-28 00:44:55.736824 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.736835 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.210) 0:00:53.837 *****
2026-02-28 00:44:55.736846 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.736856 | orchestrator |
2026-02-28 00:44:55.736867 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.736896 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.191) 0:00:54.029 *****
2026-02-28 00:44:55.736908 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.736919 | orchestrator |
2026-02-28 00:44:55.736930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.736941 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.183) 0:00:54.212 *****
2026-02-28 00:44:55.736951 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.736962 | orchestrator |
2026-02-28 00:44:55.736973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.736983 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.207) 0:00:54.419 *****
2026-02-28 00:44:55.736994 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.737023 | orchestrator |
2026-02-28 00:44:55.737034 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737045 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.666) 0:00:55.086 *****
2026-02-28 00:44:55.737056 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.737067 | orchestrator |
2026-02-28 00:44:55.737078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737089 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.207) 0:00:55.293 *****
2026-02-28 00:44:55.737099 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.737111 | orchestrator |
2026-02-28 00:44:55.737122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737133 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.218) 0:00:55.512 *****
2026-02-28 00:44:55.737143 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.737154 | orchestrator |
2026-02-28 00:44:55.737165 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737176 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.206) 0:00:55.718 *****
2026-02-28 00:44:55.737186 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd)
2026-02-28 00:44:55.737199 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd)
2026-02-28 00:44:55.737210 | orchestrator |
2026-02-28 00:44:55.737221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737232 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.470) 0:00:56.189 *****
2026-02-28 00:44:55.737243 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0)
2026-02-28 00:44:55.737253 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0)
2026-02-28 00:44:55.737264 | orchestrator |
2026-02-28 00:44:55.737275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737299 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.479) 0:00:56.669 *****
2026-02-28 00:44:55.737310 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b)
2026-02-28 00:44:55.737321 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b)
2026-02-28 00:44:55.737332 | orchestrator |
2026-02-28 00:44:55.737342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737353 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.445) 0:00:57.115 *****
2026-02-28 00:44:55.737364 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57)
2026-02-28 00:44:55.737375 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57)
2026-02-28 00:44:55.737386 | orchestrator |
2026-02-28 00:44:55.737397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:55.737407 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.443) 0:00:57.558 *****
2026-02-28 00:44:55.737418 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:44:55.737429 | orchestrator |
2026-02-28 00:44:55.737440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:55.737451 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.372) 0:00:57.931 *****
2026-02-28 00:44:55.737461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-28 00:44:55.737472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-28 00:44:55.737483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-28 00:44:55.737494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:44:55.737504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:44:55.737515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:44:55.737526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:44:55.737537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:44:55.737548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-28 00:44:55.737558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:44:55.737569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:44:55.737587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:45:04.918779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:45:04.918946 | orchestrator |
2026-02-28 00:45:04.918979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919089 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.457) 0:00:58.388 *****
2026-02-28 00:45:04.919104 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919116 | orchestrator |
2026-02-28 00:45:04.919128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919139 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.208) 0:00:58.596 *****
2026-02-28 00:45:04.919150 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919161 | orchestrator |
2026-02-28 00:45:04.919173 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919184 | orchestrator | Saturday 28 February 2026 00:44:56 +0000 (0:00:00.733) 0:00:59.330 *****
2026-02-28 00:45:04.919195 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919230 | orchestrator |
2026-02-28 00:45:04.919241 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919253 | orchestrator | Saturday 28 February 2026 00:44:56 +0000 (0:00:00.203) 0:00:59.533 *****
2026-02-28 00:45:04.919264 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919274 | orchestrator |
2026-02-28 00:45:04.919285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919296 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.221) 0:00:59.755 *****
2026-02-28 00:45:04.919309 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919321 | orchestrator |
2026-02-28 00:45:04.919334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919347 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.211) 0:00:59.966 *****
2026-02-28 00:45:04.919360 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919372 | orchestrator |
2026-02-28 00:45:04.919385 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919398 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.214) 0:01:00.181 *****
2026-02-28 00:45:04.919410 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919422 | orchestrator |
2026-02-28 00:45:04.919435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919448 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.206) 0:01:00.388 *****
2026-02-28 00:45:04.919460 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:45:04.919473 | orchestrator |
2026-02-28 00:45:04.919486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:45:04.919498 | orchestrator | Saturday 28 February 2026 00:44:57 +0000 (0:00:00.201) 0:01:00.589 *****
2026-02-28 00:45:04.919510 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-28 00:45:04.919524 | orchestrator |
ok: [testbed-node-5] => (item=sda14) 2026-02-28 00:45:04.919537 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-28 00:45:04.919549 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-28 00:45:04.919562 | orchestrator | 2026-02-28 00:45:04.919575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:04.919587 | orchestrator | Saturday 28 February 2026 00:44:58 +0000 (0:00:00.670) 0:01:01.260 ***** 2026-02-28 00:45:04.919600 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.919613 | orchestrator | 2026-02-28 00:45:04.919626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:04.919639 | orchestrator | Saturday 28 February 2026 00:44:58 +0000 (0:00:00.194) 0:01:01.454 ***** 2026-02-28 00:45:04.919652 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.919665 | orchestrator | 2026-02-28 00:45:04.919693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:04.919717 | orchestrator | Saturday 28 February 2026 00:44:58 +0000 (0:00:00.196) 0:01:01.651 ***** 2026-02-28 00:45:04.919728 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.919739 | orchestrator | 2026-02-28 00:45:04.919749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:45:04.919760 | orchestrator | Saturday 28 February 2026 00:44:59 +0000 (0:00:00.191) 0:01:01.843 ***** 2026-02-28 00:45:04.919771 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.919782 | orchestrator | 2026-02-28 00:45:04.919793 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-28 00:45:04.919804 | orchestrator | Saturday 28 February 2026 00:44:59 +0000 (0:00:00.213) 0:01:02.056 ***** 2026-02-28 00:45:04.919814 | orchestrator | skipping: [testbed-node-5] 2026-02-28 
00:45:04.919825 | orchestrator | 2026-02-28 00:45:04.919836 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-28 00:45:04.919847 | orchestrator | Saturday 28 February 2026 00:44:59 +0000 (0:00:00.341) 0:01:02.397 ***** 2026-02-28 00:45:04.919859 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f012bc14-1358-5d7b-888e-596399f0a0b7'}}) 2026-02-28 00:45:04.919878 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'de70aebc-f344-5246-8655-326adc55aaa0'}}) 2026-02-28 00:45:04.919908 | orchestrator | 2026-02-28 00:45:04.919919 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-28 00:45:04.919930 | orchestrator | Saturday 28 February 2026 00:44:59 +0000 (0:00:00.198) 0:01:02.595 ***** 2026-02-28 00:45:04.919955 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'}) 2026-02-28 00:45:04.919986 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'}) 2026-02-28 00:45:04.920041 | orchestrator | 2026-02-28 00:45:04.920062 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-28 00:45:04.920119 | orchestrator | Saturday 28 February 2026 00:45:01 +0000 (0:00:01.832) 0:01:04.428 ***** 2026-02-28 00:45:04.920132 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:04.920145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:04.920156 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 00:45:04.920167 | orchestrator | 2026-02-28 00:45:04.920178 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-28 00:45:04.920189 | orchestrator | Saturday 28 February 2026 00:45:01 +0000 (0:00:00.159) 0:01:04.588 ***** 2026-02-28 00:45:04.920200 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'}) 2026-02-28 00:45:04.920211 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'}) 2026-02-28 00:45:04.920222 | orchestrator | 2026-02-28 00:45:04.920233 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-28 00:45:04.920244 | orchestrator | Saturday 28 February 2026 00:45:03 +0000 (0:00:01.329) 0:01:05.917 ***** 2026-02-28 00:45:04.920255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:04.920266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:04.920277 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.920288 | orchestrator | 2026-02-28 00:45:04.920312 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-28 00:45:04.920323 | orchestrator | Saturday 28 February 2026 00:45:03 +0000 (0:00:00.164) 0:01:06.082 ***** 2026-02-28 00:45:04.920334 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.920345 | orchestrator | 2026-02-28 00:45:04.920356 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-28 00:45:04.920367 | 
orchestrator | Saturday 28 February 2026 00:45:03 +0000 (0:00:00.186) 0:01:06.269 ***** 2026-02-28 00:45:04.920378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:04.920396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:04.920408 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.920419 | orchestrator | 2026-02-28 00:45:04.920429 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-28 00:45:04.920440 | orchestrator | Saturday 28 February 2026 00:45:03 +0000 (0:00:00.153) 0:01:06.423 ***** 2026-02-28 00:45:04.920460 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.920471 | orchestrator | 2026-02-28 00:45:04.920482 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-28 00:45:04.920492 | orchestrator | Saturday 28 February 2026 00:45:03 +0000 (0:00:00.142) 0:01:06.565 ***** 2026-02-28 00:45:04.920503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:04.920514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:04.920525 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.920536 | orchestrator | 2026-02-28 00:45:04.920547 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-28 00:45:04.920558 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.147) 0:01:06.713 ***** 2026-02-28 00:45:04.920569 | orchestrator | 
skipping: [testbed-node-5] 2026-02-28 00:45:04.920579 | orchestrator | 2026-02-28 00:45:04.920590 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-28 00:45:04.920601 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.163) 0:01:06.877 ***** 2026-02-28 00:45:04.920612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:04.920623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:04.920634 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:04.920645 | orchestrator | 2026-02-28 00:45:04.920656 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-28 00:45:04.920667 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.164) 0:01:07.041 ***** 2026-02-28 00:45:04.920678 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:04.920689 | orchestrator | 2026-02-28 00:45:04.920700 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-28 00:45:04.920723 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.372) 0:01:07.414 ***** 2026-02-28 00:45:04.920741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:11.269190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:11.269323 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.269349 | orchestrator | 2026-02-28 00:45:11.269367 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-28 00:45:11.269386 | orchestrator | Saturday 28 February 2026 00:45:04 +0000 (0:00:00.166) 0:01:07.580 ***** 2026-02-28 00:45:11.269397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:11.269408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:11.269418 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.269428 | orchestrator | 2026-02-28 00:45:11.269438 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-28 00:45:11.269448 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.175) 0:01:07.755 ***** 2026-02-28 00:45:11.269458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:11.269467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:11.269499 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.269509 | orchestrator | 2026-02-28 00:45:11.269519 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-28 00:45:11.269528 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.166) 0:01:07.921 ***** 2026-02-28 00:45:11.269538 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.269547 | orchestrator | 2026-02-28 00:45:11.269557 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-28 00:45:11.269566 | orchestrator | Saturday 28 February 2026 00:45:05 
+0000 (0:00:00.145) 0:01:08.067 ***** 2026-02-28 00:45:11.269575 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.269585 | orchestrator | 2026-02-28 00:45:11.269595 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-28 00:45:11.269604 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.129) 0:01:08.197 ***** 2026-02-28 00:45:11.269613 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.269623 | orchestrator | 2026-02-28 00:45:11.269648 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-28 00:45:11.269659 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.163) 0:01:08.360 ***** 2026-02-28 00:45:11.269670 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:45:11.269681 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-28 00:45:11.269693 | orchestrator | } 2026-02-28 00:45:11.269704 | orchestrator | 2026-02-28 00:45:11.269715 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-28 00:45:11.269726 | orchestrator | Saturday 28 February 2026 00:45:05 +0000 (0:00:00.167) 0:01:08.528 ***** 2026-02-28 00:45:11.269736 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:45:11.269747 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-28 00:45:11.269758 | orchestrator | } 2026-02-28 00:45:11.269770 | orchestrator | 2026-02-28 00:45:11.269780 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-28 00:45:11.269791 | orchestrator | Saturday 28 February 2026 00:45:06 +0000 (0:00:00.150) 0:01:08.679 ***** 2026-02-28 00:45:11.269802 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:45:11.269813 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-28 00:45:11.269823 | orchestrator | } 2026-02-28 00:45:11.269834 | orchestrator | 2026-02-28 00:45:11.269845 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-28 00:45:11.269856 | orchestrator | Saturday 28 February 2026 00:45:06 +0000 (0:00:00.149) 0:01:08.828 ***** 2026-02-28 00:45:11.269866 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:11.269877 | orchestrator | 2026-02-28 00:45:11.269888 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-28 00:45:11.269899 | orchestrator | Saturday 28 February 2026 00:45:06 +0000 (0:00:00.521) 0:01:09.350 ***** 2026-02-28 00:45:11.269910 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:11.269921 | orchestrator | 2026-02-28 00:45:11.269931 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-28 00:45:11.269942 | orchestrator | Saturday 28 February 2026 00:45:07 +0000 (0:00:00.509) 0:01:09.860 ***** 2026-02-28 00:45:11.269952 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:11.269963 | orchestrator | 2026-02-28 00:45:11.269974 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-28 00:45:11.269984 | orchestrator | Saturday 28 February 2026 00:45:07 +0000 (0:00:00.746) 0:01:10.607 ***** 2026-02-28 00:45:11.270073 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:11.270087 | orchestrator | 2026-02-28 00:45:11.270096 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-28 00:45:11.270106 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.205) 0:01:10.813 ***** 2026-02-28 00:45:11.270115 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270125 | orchestrator | 2026-02-28 00:45:11.270135 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-28 00:45:11.270155 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.124) 0:01:10.937 ***** 2026-02-28 00:45:11.270164 | 
orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270174 | orchestrator | 2026-02-28 00:45:11.270184 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-28 00:45:11.270193 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.119) 0:01:11.057 ***** 2026-02-28 00:45:11.270203 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:45:11.270212 | orchestrator |  "vgs_report": { 2026-02-28 00:45:11.270222 | orchestrator |  "vg": [] 2026-02-28 00:45:11.270250 | orchestrator |  } 2026-02-28 00:45:11.270261 | orchestrator | } 2026-02-28 00:45:11.270270 | orchestrator | 2026-02-28 00:45:11.270280 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-28 00:45:11.270290 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.157) 0:01:11.215 ***** 2026-02-28 00:45:11.270299 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270309 | orchestrator | 2026-02-28 00:45:11.270318 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-28 00:45:11.270328 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.147) 0:01:11.362 ***** 2026-02-28 00:45:11.270337 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270346 | orchestrator | 2026-02-28 00:45:11.270356 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-28 00:45:11.270365 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.149) 0:01:11.512 ***** 2026-02-28 00:45:11.270375 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270384 | orchestrator | 2026-02-28 00:45:11.270393 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-28 00:45:11.270403 | orchestrator | Saturday 28 February 2026 00:45:08 +0000 (0:00:00.130) 0:01:11.642 ***** 2026-02-28 00:45:11.270412 | 
orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270422 | orchestrator | 2026-02-28 00:45:11.270432 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-28 00:45:11.270441 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.146) 0:01:11.789 ***** 2026-02-28 00:45:11.270451 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270460 | orchestrator | 2026-02-28 00:45:11.270470 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-28 00:45:11.270479 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.152) 0:01:11.941 ***** 2026-02-28 00:45:11.270488 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270498 | orchestrator | 2026-02-28 00:45:11.270508 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-28 00:45:11.270517 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.147) 0:01:12.088 ***** 2026-02-28 00:45:11.270526 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270536 | orchestrator | 2026-02-28 00:45:11.270545 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-28 00:45:11.270555 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.139) 0:01:12.228 ***** 2026-02-28 00:45:11.270564 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270573 | orchestrator | 2026-02-28 00:45:11.270583 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-28 00:45:11.270592 | orchestrator | Saturday 28 February 2026 00:45:09 +0000 (0:00:00.352) 0:01:12.580 ***** 2026-02-28 00:45:11.270602 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270611 | orchestrator | 2026-02-28 00:45:11.270626 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-28 00:45:11.270636 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.150) 0:01:12.730 ***** 2026-02-28 00:45:11.270646 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270655 | orchestrator | 2026-02-28 00:45:11.270665 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-28 00:45:11.270674 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.136) 0:01:12.867 ***** 2026-02-28 00:45:11.270690 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270700 | orchestrator | 2026-02-28 00:45:11.270709 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-28 00:45:11.270719 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.150) 0:01:13.018 ***** 2026-02-28 00:45:11.270728 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270738 | orchestrator | 2026-02-28 00:45:11.270747 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-28 00:45:11.270757 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.141) 0:01:13.160 ***** 2026-02-28 00:45:11.270766 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270776 | orchestrator | 2026-02-28 00:45:11.270785 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-28 00:45:11.270795 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.147) 0:01:13.308 ***** 2026-02-28 00:45:11.270804 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270813 | orchestrator | 2026-02-28 00:45:11.270823 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-28 00:45:11.270832 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.138) 0:01:13.447 ***** 2026-02-28 00:45:11.270842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:11.270852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:11.270861 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270871 | orchestrator | 2026-02-28 00:45:11.270880 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-28 00:45:11.270890 | orchestrator | Saturday 28 February 2026 00:45:10 +0000 (0:00:00.161) 0:01:13.609 ***** 2026-02-28 00:45:11.270899 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:11.270909 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:11.270918 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:11.270928 | orchestrator | 2026-02-28 00:45:11.270938 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-28 00:45:11.270947 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.146) 0:01:13.755 ***** 2026-02-28 00:45:11.270964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460288 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460374 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460386 | orchestrator | 2026-02-28 00:45:14.460395 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-28 00:45:14.460405 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.175) 0:01:13.931 ***** 2026-02-28 00:45:14.460414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460431 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460439 | orchestrator | 2026-02-28 00:45:14.460447 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-28 00:45:14.460456 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.158) 0:01:14.090 ***** 2026-02-28 00:45:14.460485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460501 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460509 | orchestrator | 2026-02-28 00:45:14.460517 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-28 00:45:14.460525 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.155) 0:01:14.245 ***** 2026-02-28 00:45:14.460533 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460550 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460558 | orchestrator | 2026-02-28 00:45:14.460566 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-28 00:45:14.460574 | orchestrator | Saturday 28 February 2026 00:45:11 +0000 (0:00:00.420) 0:01:14.666 ***** 2026-02-28 00:45:14.460582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460598 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460607 | orchestrator | 2026-02-28 00:45:14.460615 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-28 00:45:14.460623 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.165) 0:01:14.831 ***** 2026-02-28 00:45:14.460631 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460647 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460655 | orchestrator | 2026-02-28 00:45:14.460663 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-28 00:45:14.460671 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.149) 0:01:14.981 ***** 2026-02-28 00:45:14.460679 | 
orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:14.460688 | orchestrator | 2026-02-28 00:45:14.460696 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-28 00:45:14.460704 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.521) 0:01:15.502 ***** 2026-02-28 00:45:14.460712 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:14.460720 | orchestrator | 2026-02-28 00:45:14.460728 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-28 00:45:14.460736 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:00.583) 0:01:16.086 ***** 2026-02-28 00:45:14.460744 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:14.460751 | orchestrator | 2026-02-28 00:45:14.460759 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-28 00:45:14.460767 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:00.148) 0:01:16.234 ***** 2026-02-28 00:45:14.460776 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'vg_name': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'}) 2026-02-28 00:45:14.460784 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'vg_name': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'}) 2026-02-28 00:45:14.460798 | orchestrator | 2026-02-28 00:45:14.460806 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-28 00:45:14.460814 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:00.203) 0:01:16.438 ***** 2026-02-28 00:45:14.460852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460872 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460881 | orchestrator | 2026-02-28 00:45:14.460890 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-28 00:45:14.460899 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:00.181) 0:01:16.619 ***** 2026-02-28 00:45:14.460908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460926 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460935 | orchestrator | 2026-02-28 00:45:14.460945 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:45:14.460953 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.165) 0:01:16.784 ***** 2026-02-28 00:45:14.460963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})  2026-02-28 00:45:14.460972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})  2026-02-28 00:45:14.460980 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:14.460989 | orchestrator | 2026-02-28 00:45:14.461042 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:45:14.461056 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.153) 0:01:16.937 ***** 2026-02-28 00:45:14.461070 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:45:14.461082 | orchestrator |  "lvm_report": { 2026-02-28 00:45:14.461103 | orchestrator |  "lv": [ 2026-02-28 00:45:14.461117 | orchestrator |  { 2026-02-28 00:45:14.461129 | orchestrator |  "lv_name": "osd-block-de70aebc-f344-5246-8655-326adc55aaa0", 2026-02-28 00:45:14.461150 | orchestrator |  "vg_name": "ceph-de70aebc-f344-5246-8655-326adc55aaa0" 2026-02-28 00:45:14.461163 | orchestrator |  }, 2026-02-28 00:45:14.461176 | orchestrator |  { 2026-02-28 00:45:14.461189 | orchestrator |  "lv_name": "osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7", 2026-02-28 00:45:14.461200 | orchestrator |  "vg_name": "ceph-f012bc14-1358-5d7b-888e-596399f0a0b7" 2026-02-28 00:45:14.461212 | orchestrator |  } 2026-02-28 00:45:14.461225 | orchestrator |  ], 2026-02-28 00:45:14.461238 | orchestrator |  "pv": [ 2026-02-28 00:45:14.461250 | orchestrator |  { 2026-02-28 00:45:14.461261 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:45:14.461273 | orchestrator |  "vg_name": "ceph-f012bc14-1358-5d7b-888e-596399f0a0b7" 2026-02-28 00:45:14.461285 | orchestrator |  }, 2026-02-28 00:45:14.461297 | orchestrator |  { 2026-02-28 00:45:14.461311 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:45:14.461323 | orchestrator |  "vg_name": "ceph-de70aebc-f344-5246-8655-326adc55aaa0" 2026-02-28 00:45:14.461336 | orchestrator |  } 2026-02-28 00:45:14.461348 | orchestrator |  ] 2026-02-28 00:45:14.461361 | orchestrator |  } 2026-02-28 00:45:14.461375 | orchestrator | } 2026-02-28 00:45:14.461397 | orchestrator | 2026-02-28 00:45:14.461409 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:45:14.461422 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:45:14.461435 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:45:14.461448 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:45:14.461461 | orchestrator | 2026-02-28 00:45:14.461474 | orchestrator | 2026-02-28 00:45:14.461489 | orchestrator | 2026-02-28 00:45:14.461505 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:45:14.461519 | orchestrator | Saturday 28 February 2026 00:45:14 +0000 (0:00:00.156) 0:01:17.094 ***** 2026-02-28 00:45:14.461535 | orchestrator | =============================================================================== 2026-02-28 00:45:14.461549 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s 2026-02-28 00:45:14.461565 | orchestrator | Create block LVs -------------------------------------------------------- 4.28s 2026-02-28 00:45:14.461581 | orchestrator | Add known partitions to the list of available block devices ------------- 1.97s 2026-02-28 00:45:14.461596 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.80s 2026-02-28 00:45:14.461611 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-02-28 00:45:14.461626 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2026-02-28 00:45:14.461640 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-02-28 00:45:14.461656 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s 2026-02-28 00:45:14.461687 | orchestrator | Add known links to the list of available block devices ------------------ 1.44s 2026-02-28 00:45:14.900531 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-02-28 00:45:14.900632 | orchestrator | Print LVM report data --------------------------------------------------- 0.99s 2026-02-28 00:45:14.900648 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-02-28 00:45:14.900660 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.89s 2026-02-28 00:45:14.900672 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.83s 2026-02-28 00:45:14.900683 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-02-28 00:45:14.900694 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-02-28 00:45:14.900706 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-02-28 00:45:14.900717 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2026-02-28 00:45:14.900728 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.75s 2026-02-28 00:45:14.900740 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.75s 2026-02-28 00:45:27.442899 | orchestrator | 2026-02-28 00:45:27 | INFO  | Task 5a2549ae-79c7-471f-9a3e-4018b9b8c5b9 (facts) was prepared for execution. 2026-02-28 00:45:27.443091 | orchestrator | 2026-02-28 00:45:27 | INFO  | It takes a moment until task 5a2549ae-79c7-471f-9a3e-4018b9b8c5b9 (facts) has been started and output is visible here. 
2026-02-28 00:45:40.503347 | orchestrator | 2026-02-28 00:45:40.503451 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-28 00:45:40.503460 | orchestrator | 2026-02-28 00:45:40.503465 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:45:40.503471 | orchestrator | Saturday 28 February 2026 00:45:31 +0000 (0:00:00.268) 0:00:00.268 ***** 2026-02-28 00:45:40.503496 | orchestrator | ok: [testbed-manager] 2026-02-28 00:45:40.503503 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:45:40.503508 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:45:40.503513 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:45:40.503518 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:45:40.503523 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:40.503528 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:40.503534 | orchestrator | 2026-02-28 00:45:40.503539 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:45:40.503556 | orchestrator | Saturday 28 February 2026 00:45:32 +0000 (0:00:01.017) 0:00:01.286 ***** 2026-02-28 00:45:40.503562 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:45:40.503568 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:45:40.503573 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:45:40.503578 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:45:40.503583 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:45:40.503588 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:40.503595 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:40.503604 | orchestrator | 2026-02-28 00:45:40.503615 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:45:40.503624 | orchestrator | 2026-02-28 00:45:40.503633 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-28 00:45:40.503641 | orchestrator | Saturday 28 February 2026 00:45:33 +0000 (0:00:01.107) 0:00:02.393 ***** 2026-02-28 00:45:40.503650 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:45:40.503658 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:45:40.503667 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:45:40.503675 | orchestrator | ok: [testbed-manager] 2026-02-28 00:45:40.503684 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:40.503692 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:40.503700 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:45:40.503707 | orchestrator | 2026-02-28 00:45:40.503712 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:45:40.503717 | orchestrator | 2026-02-28 00:45:40.503722 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:45:40.503728 | orchestrator | Saturday 28 February 2026 00:45:39 +0000 (0:00:05.680) 0:00:08.074 ***** 2026-02-28 00:45:40.503733 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:45:40.503738 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:45:40.503744 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:45:40.503752 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:45:40.503761 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:45:40.503769 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:40.503777 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:40.503785 | orchestrator | 2026-02-28 00:45:40.503794 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:45:40.503800 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:40.503807 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-28 00:45:40.503812 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:40.503817 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:40.503822 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:40.503827 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:40.503832 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:40.503842 | orchestrator | 2026-02-28 00:45:40.503847 | orchestrator | 2026-02-28 00:45:40.503853 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:45:40.503858 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.530) 0:00:08.605 ***** 2026-02-28 00:45:40.503863 | orchestrator | =============================================================================== 2026-02-28 00:45:40.503868 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.68s 2026-02-28 00:45:40.503873 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-02-28 00:45:40.503878 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s 2026-02-28 00:45:40.503883 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-02-28 00:45:52.946971 | orchestrator | 2026-02-28 00:45:52 | INFO  | Task 0d9f9ad4-5439-4ea5-9d09-4fcab37406fc (frr) was prepared for execution. 2026-02-28 00:45:52.947129 | orchestrator | 2026-02-28 00:45:52 | INFO  | It takes a moment until task 0d9f9ad4-5439-4ea5-9d09-4fcab37406fc (frr) has been started and output is visible here. 
2026-02-28 00:46:19.239490 | orchestrator | 2026-02-28 00:46:19.239599 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-28 00:46:19.239614 | orchestrator | 2026-02-28 00:46:19.239626 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-28 00:46:19.239637 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.237) 0:00:00.237 ***** 2026-02-28 00:46:19.239649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:46:19.239661 | orchestrator | 2026-02-28 00:46:19.239700 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-28 00:46:19.239712 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.229) 0:00:00.466 ***** 2026-02-28 00:46:19.239723 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:19.239735 | orchestrator | 2026-02-28 00:46:19.239747 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-28 00:46:19.239758 | orchestrator | Saturday 28 February 2026 00:45:58 +0000 (0:00:01.215) 0:00:01.682 ***** 2026-02-28 00:46:19.239786 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:19.239798 | orchestrator | 2026-02-28 00:46:19.239809 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-28 00:46:19.239820 | orchestrator | Saturday 28 February 2026 00:46:08 +0000 (0:00:10.201) 0:00:11.883 ***** 2026-02-28 00:46:19.239831 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:19.239842 | orchestrator | 2026-02-28 00:46:19.239853 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-28 00:46:19.239864 | orchestrator | Saturday 28 February 2026 00:46:09 +0000 (0:00:01.065) 0:00:12.949 ***** 2026-02-28 
00:46:19.239875 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:19.239886 | orchestrator | 2026-02-28 00:46:19.239897 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-28 00:46:19.239908 | orchestrator | Saturday 28 February 2026 00:46:10 +0000 (0:00:01.025) 0:00:13.975 ***** 2026-02-28 00:46:19.239922 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:19.239940 | orchestrator | 2026-02-28 00:46:19.239958 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-28 00:46:19.240043 | orchestrator | Saturday 28 February 2026 00:46:12 +0000 (0:00:01.267) 0:00:15.243 ***** 2026-02-28 00:46:19.240058 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:19.240071 | orchestrator | 2026-02-28 00:46:19.240084 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-28 00:46:19.240097 | orchestrator | Saturday 28 February 2026 00:46:12 +0000 (0:00:00.136) 0:00:15.379 ***** 2026-02-28 00:46:19.240110 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:19.240146 | orchestrator | 2026-02-28 00:46:19.240159 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-28 00:46:19.240170 | orchestrator | Saturday 28 February 2026 00:46:12 +0000 (0:00:00.169) 0:00:15.549 ***** 2026-02-28 00:46:19.240180 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:19.240191 | orchestrator | 2026-02-28 00:46:19.240202 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-28 00:46:19.240213 | orchestrator | Saturday 28 February 2026 00:46:13 +0000 (0:00:00.969) 0:00:16.518 ***** 2026-02-28 00:46:19.240224 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-28 00:46:19.240234 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-28 00:46:19.240247 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-28 00:46:19.240258 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-28 00:46:19.240269 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-28 00:46:19.240279 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-28 00:46:19.240290 | orchestrator | 2026-02-28 00:46:19.240301 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-28 00:46:19.240312 | orchestrator | Saturday 28 February 2026 00:46:15 +0000 (0:00:02.348) 0:00:18.867 ***** 2026-02-28 00:46:19.240322 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:19.240333 | orchestrator | 2026-02-28 00:46:19.240344 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-28 00:46:19.240355 | orchestrator | Saturday 28 February 2026 00:46:17 +0000 (0:00:01.645) 0:00:20.512 ***** 2026-02-28 00:46:19.240366 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:19.240377 | orchestrator | 2026-02-28 00:46:19.240387 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:46:19.240399 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:46:19.240410 | orchestrator | 2026-02-28 00:46:19.240420 | orchestrator | 2026-02-28 00:46:19.240431 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:46:19.240442 | orchestrator | Saturday 28 February 2026 00:46:18 +0000 (0:00:01.444) 0:00:21.957 ***** 2026-02-28 00:46:19.240452 | 
orchestrator | =============================================================================== 2026-02-28 00:46:19.240463 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.20s 2026-02-28 00:46:19.240474 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.35s 2026-02-28 00:46:19.240485 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.65s 2026-02-28 00:46:19.240495 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.45s 2026-02-28 00:46:19.240506 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.27s 2026-02-28 00:46:19.240534 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.22s 2026-02-28 00:46:19.240546 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.07s 2026-02-28 00:46:19.240557 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.03s 2026-02-28 00:46:19.240568 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2026-02-28 00:46:19.240578 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-02-28 00:46:19.240589 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s 2026-02-28 00:46:19.240599 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-02-28 00:46:19.552200 | orchestrator | 2026-02-28 00:46:19.554539 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Feb 28 00:46:19 UTC 2026 2026-02-28 00:46:19.554572 | orchestrator | 2026-02-28 00:46:21.555289 | orchestrator | 2026-02-28 00:46:21 | INFO  | Collection nutshell is prepared for execution 2026-02-28 00:46:21.555397 | orchestrator | 2026-02-28 00:46:21 | INFO  | A [0] - 
dotfiles 2026-02-28 00:46:31.740794 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - homer 2026-02-28 00:46:31.740908 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - netdata 2026-02-28 00:46:31.741185 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - openstackclient 2026-02-28 00:46:31.741215 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - phpmyadmin 2026-02-28 00:46:31.741826 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - common 2026-02-28 00:46:31.747815 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- loadbalancer 2026-02-28 00:46:31.747876 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [2] --- opensearch 2026-02-28 00:46:31.748098 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [2] --- mariadb-ng 2026-02-28 00:46:31.748593 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [3] ---- horizon 2026-02-28 00:46:31.750255 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [3] ---- keystone 2026-02-28 00:46:31.750298 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- neutron 2026-02-28 00:46:31.750310 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [5] ------ wait-for-nova 2026-02-28 00:46:31.750323 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [6] ------- octavia 2026-02-28 00:46:31.751701 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- barbican 2026-02-28 00:46:31.751748 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- designate 2026-02-28 00:46:31.752130 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- ironic 2026-02-28 00:46:31.752157 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- placement 2026-02-28 00:46:31.752593 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- magnum 2026-02-28 00:46:31.754006 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- openvswitch 2026-02-28 00:46:31.754348 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [2] --- ovn 2026-02-28 00:46:31.755308 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- memcached 2026-02-28 
00:46:31.755778 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- redis 2026-02-28 00:46:31.756099 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- rabbitmq-ng 2026-02-28 00:46:31.756872 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - kubernetes 2026-02-28 00:46:31.761879 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- kubeconfig 2026-02-28 00:46:31.761950 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- copy-kubeconfig 2026-02-28 00:46:31.762224 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [0] - ceph 2026-02-28 00:46:31.765688 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [1] -- ceph-pools 2026-02-28 00:46:31.765729 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [2] --- copy-ceph-keys 2026-02-28 00:46:31.766187 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [3] ---- cephclient 2026-02-28 00:46:31.766213 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-28 00:46:31.766224 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- wait-for-keystone 2026-02-28 00:46:31.766235 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-28 00:46:31.766577 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [5] ------ glance 2026-02-28 00:46:31.766608 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [5] ------ cinder 2026-02-28 00:46:31.766949 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [5] ------ nova 2026-02-28 00:46:31.767148 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [4] ----- prometheus 2026-02-28 00:46:31.767600 | orchestrator | 2026-02-28 00:46:31 | INFO  | A [5] ------ grafana 2026-02-28 00:46:31.986465 | orchestrator | 2026-02-28 00:46:31 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-28 00:46:31.986582 | orchestrator | 2026-02-28 00:46:31 | INFO  | Tasks are running in the background 2026-02-28 00:46:35.022469 | orchestrator | 2026-02-28 00:46:35 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-02-28 00:46:37.133548 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED 2026-02-28 00:46:37.133734 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED 2026-02-28 00:46:37.134310 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:46:37.136372 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:46:37.137041 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED 2026-02-28 00:46:37.137468 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED 2026-02-28 00:46:37.138082 | orchestrator | 2026-02-28 00:46:37 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED 2026-02-28 00:46:37.138109 | orchestrator | 2026-02-28 00:46:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:46:40.165914 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED 2026-02-28 00:46:40.167427 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED 2026-02-28 00:46:40.167982 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:46:40.168542 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:46:40.169083 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED 2026-02-28 00:46:40.169692 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED 2026-02-28 00:46:40.171367 | orchestrator | 2026-02-28 00:46:40 | INFO  | Task 
55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED 2026-02-28 00:46:40.171388 | orchestrator | 2026-02-28 00:46:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:46:43.250599 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED 2026-02-28 00:46:43.253875 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED 2026-02-28 00:46:43.254253 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:46:43.258715 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:46:43.259201 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED 2026-02-28 00:46:43.259723 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED 2026-02-28 00:46:43.260494 | orchestrator | 2026-02-28 00:46:43 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED 2026-02-28 00:46:43.260546 | orchestrator | 2026-02-28 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:46:46.296619 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED 2026-02-28 00:46:46.297068 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED 2026-02-28 00:46:46.297787 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:46:46.298506 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:46:46.299244 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED 2026-02-28 00:46:46.299928 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task 
67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:46:46.300568 | orchestrator | 2026-02-28 00:46:46 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:46:46.300658 | orchestrator | 2026-02-28 00:46:46 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:46:49.365700 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:46:49.365778 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED
2026-02-28 00:46:49.366200 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:46:49.366764 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:46:49.367213 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:46:49.368011 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:46:49.368715 | orchestrator | 2026-02-28 00:46:49 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:46:49.368784 | orchestrator | 2026-02-28 00:46:49 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:46:52.419104 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:46:52.419175 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED
2026-02-28 00:46:52.419184 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:46:52.419190 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:46:52.419197 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:46:52.419203 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:46:52.419209 | orchestrator | 2026-02-28 00:46:52 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:46:52.419216 | orchestrator | 2026-02-28 00:46:52 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:46:55.476503 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:46:55.482223 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED
2026-02-28 00:46:55.483141 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:46:55.484103 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:46:55.487556 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:46:55.488510 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:46:55.489992 | orchestrator | 2026-02-28 00:46:55 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:46:55.491609 | orchestrator | 2026-02-28 00:46:55 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:46:58.542196 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:46:58.542716 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state STARTED
2026-02-28 00:46:58.544694 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:46:58.544792 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:46:58.549155 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:46:58.549290 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:46:58.550149 | orchestrator | 2026-02-28 00:46:58 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:46:58.550190 | orchestrator | 2026-02-28 00:46:58 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:01.680625 | orchestrator |
2026-02-28 00:47:01.680726 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-02-28 00:47:01.680759 | orchestrator |
2026-02-28 00:47:01.680780 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-02-28 00:47:01.680798 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:00.668) 0:00:00.668 *****
2026-02-28 00:47:01.680815 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:47:01.680834 | orchestrator | changed: [testbed-manager]
2026-02-28 00:47:01.680851 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:47:01.680871 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:47:01.680889 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:47:01.680908 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:47:01.680924 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:47:01.680935 | orchestrator |
2026-02-28 00:47:01.680946 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-02-28 00:47:01.681033 | orchestrator | Saturday 28 February 2026 00:46:48 +0000 (0:00:04.298) 0:00:04.966 *****
2026-02-28 00:47:01.681046 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:47:01.681059 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:47:01.681070 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:47:01.681081 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:47:01.681092 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:47:01.681103 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:47:01.681114 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:47:01.681125 | orchestrator |
2026-02-28 00:47:01.681136 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-02-28 00:47:01.681147 | orchestrator | Saturday 28 February 2026 00:46:50 +0000 (0:00:01.365) 0:00:06.331 *****
2026-02-28 00:47:01.681173 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:49.573414', 'end': '2026-02-28 00:46:49.579662', 'delta': '0:00:00.006248', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681217 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:49.570424', 'end': '2026-02-28 00:46:49.579145', 'delta': '0:00:00.008721', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681232 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:49.768795', 'end': '2026-02-28 00:46:49.773404', 'delta': '0:00:00.004609', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681273 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:49.576447', 'end': '2026-02-28 00:46:49.585267', 'delta': '0:00:00.008820', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681287 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:49.615775', 'end': '2026-02-28 00:46:49.622750', 'delta': '0:00:00.006975', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681306 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:49.772592', 'end': '2026-02-28 00:46:49.780807', 'delta': '0:00:00.008215', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681333 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:50.063780', 'end': '2026-02-28 00:46:50.072858', 'delta': '0:00:00.009078', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:47:01.681346 | orchestrator |
2026-02-28 00:47:01.681359 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-02-28 00:47:01.681373 | orchestrator | Saturday 28 February 2026 00:46:52 +0000 (0:00:01.968) 0:00:08.300 *****
2026-02-28 00:47:01.681386 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:47:01.681399 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:47:01.681411 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:47:01.681423 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:47:01.681435 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:47:01.681448 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:47:01.681461 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:47:01.681473 | orchestrator |
2026-02-28 00:47:01.681486 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-02-28 00:47:01.681498 | orchestrator | Saturday 28 February 2026 00:46:54 +0000 (0:00:02.226) 0:00:10.527 *****
2026-02-28 00:47:01.681511 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:47:01.681522 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:47:01.681534 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:47:01.681544 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:47:01.681555 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:47:01.681566 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:47:01.681577 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:47:01.681588 | orchestrator |
2026-02-28 00:47:01.681599 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:47:01.681618 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681630 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681642 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681653 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681671 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681681 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681692 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:47:01.681703 | orchestrator |
2026-02-28 00:47:01.681714 | orchestrator |
2026-02-28 00:47:01.681725 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:47:01.681736 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:03.298) 0:00:13.825 *****
2026-02-28 00:47:01.681747 | orchestrator | ===============================================================================
2026-02-28 00:47:01.681758 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.30s
2026-02-28 00:47:01.681773 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.30s
2026-02-28 00:47:01.681784 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.23s
2026-02-28 00:47:01.681795 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.97s
2026-02-28 00:47:01.681806 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.37s
2026-02-28 00:47:01.681817 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:01.681828 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task e074a7a2-935f-4115-88b8-c36403a05bde is in state SUCCESS
2026-02-28 00:47:01.681839 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:01.681850 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:01.681861 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:01.681872 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:01.681882 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:01.681893 | orchestrator | 2026-02-28 00:47:01 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:01.681904 | orchestrator | 2026-02-28 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:04.714906 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:04.715561 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:04.718755 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:04.719365 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:04.720550 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:04.722119 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:04.722703 | orchestrator | 2026-02-28 00:47:04 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:04.722730 | orchestrator | 2026-02-28 00:47:04 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:07.794443 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:07.794510 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:07.795101 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:07.796439 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:07.796932 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:07.797935 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:07.801109 | orchestrator | 2026-02-28 00:47:07 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:07.801190 | orchestrator | 2026-02-28 00:47:07 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:10.852930 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:10.856688 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:10.862481 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:10.872596 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:10.875776 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:10.876413 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:10.876871 | orchestrator | 2026-02-28 00:47:10 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:10.876889 | orchestrator | 2026-02-28 00:47:10 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:14.089815 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:14.089923 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:14.090086 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:14.090118 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:14.090137 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:14.090154 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:14.090166 | orchestrator | 2026-02-28 00:47:13 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:14.090177 | orchestrator | 2026-02-28 00:47:13 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:17.052117 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:17.059117 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:17.059202 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:17.059216 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:17.063690 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:17.065090 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:17.067322 | orchestrator | 2026-02-28 00:47:17 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:17.067375 | orchestrator | 2026-02-28 00:47:17 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:20.138424 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:20.146447 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:20.150213 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:20.153273 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:20.158218 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:20.163292 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:20.167331 | orchestrator | 2026-02-28 00:47:20 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:20.167399 | orchestrator | 2026-02-28 00:47:20 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:23.372300 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:23.372353 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:23.372364 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:23.372373 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:23.372382 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:23.372391 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:23.372400 | orchestrator | 2026-02-28 00:47:23 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state STARTED
2026-02-28 00:47:23.372409 | orchestrator | 2026-02-28 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:26.636427 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:26.636536 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:26.636553 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:26.636564 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:26.636576 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:26.636587 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:26.636598 | orchestrator | 2026-02-28 00:47:26 | INFO  | Task 55dcd8b9-f178-40b0-a191-aa53d66a4a0f is in state SUCCESS
2026-02-28 00:47:26.636609 | orchestrator | 2026-02-28 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:29.394636 | orchestrator | 2026-02-28 00:47:29 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:29.395547 | orchestrator | 2026-02-28 00:47:29 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:29.403601 | orchestrator | 2026-02-28 00:47:29 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:29.403665 | orchestrator | 2026-02-28 00:47:29 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:29.403677 | orchestrator | 2026-02-28 00:47:29 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:29.403688 | orchestrator | 2026-02-28 00:47:29 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:29.403700 | orchestrator | 2026-02-28 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:32.471143 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:32.471318 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:32.471347 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:32.471369 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:32.471388 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:32.471408 | orchestrator | 2026-02-28 00:47:32 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:32.471428 | orchestrator | 2026-02-28 00:47:32 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:35.549658 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:35.549767 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:35.549783 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:35.549822 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:35.557421 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:35.557508 | orchestrator | 2026-02-28 00:47:35 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state STARTED
2026-02-28 00:47:35.557521 | orchestrator | 2026-02-28 00:47:35 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:38.668576 | orchestrator | 2026-02-28 00:47:38 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:38.669350 | orchestrator | 2026-02-28 00:47:38 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:38.682874 | orchestrator | 2026-02-28 00:47:38 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:38.686525 | orchestrator | 2026-02-28 00:47:38 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:38.687052 | orchestrator | 2026-02-28 00:47:38 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:38.688395 | orchestrator | 2026-02-28 00:47:38 | INFO  | Task 67d90b90-8f86-4b41-9779-1ce99d7e0c47 is in state SUCCESS
2026-02-28 00:47:38.688504 | orchestrator | 2026-02-28 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:41.748367 | orchestrator | 2026-02-28 00:47:41 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:41.750213 | orchestrator | 2026-02-28 00:47:41 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:41.752109 | orchestrator | 2026-02-28 00:47:41 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:41.752878 | orchestrator | 2026-02-28 00:47:41 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:41.756252 | orchestrator | 2026-02-28 00:47:41 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:41.756294 | orchestrator | 2026-02-28 00:47:41 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:44.896332 | orchestrator | 2026-02-28 00:47:44 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:44.896447 | orchestrator | 2026-02-28 00:47:44 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:44.897555 | orchestrator | 2026-02-28 00:47:44 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:44.898746 | orchestrator | 2026-02-28 00:47:44 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:44.899680 | orchestrator | 2026-02-28 00:47:44 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:44.899713 | orchestrator | 2026-02-28 00:47:44 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:47.998708 | orchestrator | 2026-02-28 00:47:47 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:47.998812 | orchestrator | 2026-02-28 00:47:47 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:47.998827 | orchestrator | 2026-02-28 00:47:47 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:48.012845 | orchestrator | 2026-02-28 00:47:48 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:48.012917 | orchestrator | 2026-02-28 00:47:48 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:48.012928 | orchestrator | 2026-02-28 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:51.155742 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:51.155829 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:51.155840 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:51.155848 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:51.155855 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:51.155919 | orchestrator | 2026-02-28 00:47:51 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:54.270666 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:54.270917 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:54.270987 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:54.271000 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:54.271025 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:54.271060 | orchestrator | 2026-02-28 00:47:54 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:57.371161 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:47:57.371266 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:47:57.371283 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:47:57.371295 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:47:57.371306 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:47:57.371318 | orchestrator | 2026-02-28 00:47:57 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:00.484613 | orchestrator | 2026-02-28 00:48:00 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:00.486661 | orchestrator | 2026-02-28 00:48:00 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:00.487821 | orchestrator | 2026-02-28 00:48:00 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:48:00.490900 | orchestrator | 2026-02-28 00:48:00 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:00.493034 | orchestrator | 2026-02-28 00:48:00 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:48:00.493090 | orchestrator | 2026-02-28 00:48:00 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:03.773102 | orchestrator | 2026-02-28 00:48:03 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:03.773215 | orchestrator | 2026-02-28 00:48:03 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:03.773669 | orchestrator | 2026-02-28 00:48:03 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:48:03.774288 | orchestrator | 2026-02-28 00:48:03 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:03.777189 | orchestrator | 2026-02-28 00:48:03 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:48:03.777261 | orchestrator | 2026-02-28 00:48:03 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:06.934344 | orchestrator | 2026-02-28 00:48:06 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:06.935388 | orchestrator | 2026-02-28 00:48:06 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:06.941039 | orchestrator | 2026-02-28 00:48:06 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:48:06.943282 | orchestrator | 2026-02-28 00:48:06 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:06.946736 | orchestrator | 2026-02-28 00:48:06 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:48:06.946996 | orchestrator | 2026-02-28 00:48:06 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:10.053477 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:10.055537 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:10.067671 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:48:10.069660 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:10.071104 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:48:10.073529 | orchestrator | 2026-02-28 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:13.171499 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:13.175830 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:13.179402 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state STARTED
2026-02-28 00:48:13.180637 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:13.183389 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED
2026-02-28 00:48:13.183458 | orchestrator | 2026-02-28 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:16.286876 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:16.289157 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:16.290843 | orchestrator |
2026-02-28 00:48:16.290881 | orchestrator |
2026-02-28 00:48:16.290894 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-02-28 00:48:16.290907 | orchestrator |
2026-02-28 00:48:16.290948 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-02-28 00:48:16.290961 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:00.926) 0:00:00.926 *****
2026-02-28 00:48:16.290973 | orchestrator | ok: [testbed-manager] => {
2026-02-28 00:48:16.290986 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-02-28 00:48:16.291000 | orchestrator | } 2026-02-28 00:48:16.291011 | orchestrator | 2026-02-28 00:48:16.291022 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-02-28 00:48:16.291034 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:00.346) 0:00:01.273 ***** 2026-02-28 00:48:16.291045 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.291056 | orchestrator | 2026-02-28 00:48:16.291068 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-02-28 00:48:16.291079 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:01.747) 0:00:03.021 ***** 2026-02-28 00:48:16.291090 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-02-28 00:48:16.291101 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-02-28 00:48:16.291112 | orchestrator | 2026-02-28 00:48:16.291124 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-02-28 00:48:16.291135 | orchestrator | Saturday 28 February 2026 00:46:49 +0000 (0:00:01.814) 0:00:04.835 ***** 2026-02-28 00:48:16.291146 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.291157 | orchestrator | 2026-02-28 00:48:16.291168 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-02-28 00:48:16.291179 | orchestrator | Saturday 28 February 2026 00:46:53 +0000 (0:00:03.884) 0:00:08.720 ***** 2026-02-28 00:48:16.291190 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.291201 | orchestrator | 2026-02-28 00:48:16.291212 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-02-28 00:48:16.291225 | orchestrator | Saturday 28 February 2026 00:46:55 +0000 (0:00:02.605) 0:00:11.326 ***** 2026-02-28 00:48:16.291236 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
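
The "FAILED - RETRYING: … (10 retries left)." lines in this log come from Ansible retrying the service-management tasks until the containers report healthy. As a rough illustration only (not the osism roles' actual code; `label` and the flaky callable are hypothetical), that retry behaviour looks like:

```python
import time

def retry(func, retries=10, delay=5, label="task"):
    """Call func until it succeeds, logging failures in the style above.

    retries/delay mirror typical Ansible `retries`/`delay` settings;
    `label` is a hypothetical task name used only for the log message.
    """
    for attempt in range(retries + 1):
        try:
            return func()
        except Exception:
            left = retries - attempt
            if left == 0:
                raise  # out of retries: surface the last failure
            print(f"FAILED - RETRYING: {label} ({left} retries left).")
            time.sleep(delay)
```

A task that fails while its container is still starting, then succeeds on a later attempt, produces exactly one "FAILED - RETRYING" line followed by an `ok:` result, as seen for "Manage homer service" above.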
2026-02-28 00:48:16.291271 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.291283 | orchestrator | 2026-02-28 00:48:16.291294 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-02-28 00:48:16.291305 | orchestrator | Saturday 28 February 2026 00:47:21 +0000 (0:00:25.951) 0:00:37.277 ***** 2026-02-28 00:48:16.291316 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.291326 | orchestrator | 2026-02-28 00:48:16.291337 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:48:16.291348 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:16.291360 | orchestrator | 2026-02-28 00:48:16.291371 | orchestrator | 2026-02-28 00:48:16.291381 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:48:16.291392 | orchestrator | Saturday 28 February 2026 00:47:25 +0000 (0:00:03.531) 0:00:40.808 ***** 2026-02-28 00:48:16.291403 | orchestrator | =============================================================================== 2026-02-28 00:48:16.291414 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.95s 2026-02-28 00:48:16.291425 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.88s 2026-02-28 00:48:16.291436 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.53s 2026-02-28 00:48:16.291449 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.61s 2026-02-28 00:48:16.291462 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.81s 2026-02-28 00:48:16.291474 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.75s 2026-02-28 00:48:16.291486 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.35s 2026-02-28 00:48:16.291498 | orchestrator | 2026-02-28 00:48:16.291510 | orchestrator | 2026-02-28 00:48:16.291523 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-28 00:48:16.291535 | orchestrator | 2026-02-28 00:48:16.291547 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-28 00:48:16.291560 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:00.714) 0:00:00.714 ***** 2026-02-28 00:48:16.291589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-28 00:48:16.291603 | orchestrator | 2026-02-28 00:48:16.291616 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-28 00:48:16.291633 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:00.532) 0:00:01.247 ***** 2026-02-28 00:48:16.291652 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-28 00:48:16.291670 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-28 00:48:16.291689 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-28 00:48:16.291707 | orchestrator | 2026-02-28 00:48:16.291725 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-28 00:48:16.291743 | orchestrator | Saturday 28 February 2026 00:46:50 +0000 (0:00:02.541) 0:00:03.789 ***** 2026-02-28 00:48:16.291762 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.291781 | orchestrator | 2026-02-28 00:48:16.291800 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-28 00:48:16.291817 | orchestrator | Saturday 28 February 2026 00:46:53 +0000 (0:00:03.434) 
0:00:07.224 ***** 2026-02-28 00:48:16.291850 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-28 00:48:16.291868 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.291886 | orchestrator | 2026-02-28 00:48:16.291905 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-28 00:48:16.291945 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:35.170) 0:00:42.395 ***** 2026-02-28 00:48:16.291956 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.291980 | orchestrator | 2026-02-28 00:48:16.291991 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-28 00:48:16.292002 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:01.780) 0:00:44.175 ***** 2026-02-28 00:48:16.292013 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.292024 | orchestrator | 2026-02-28 00:48:16.292035 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-28 00:48:16.292046 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:00.994) 0:00:45.170 ***** 2026-02-28 00:48:16.292057 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.292068 | orchestrator | 2026-02-28 00:48:16.292079 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-28 00:48:16.292090 | orchestrator | Saturday 28 February 2026 00:47:34 +0000 (0:00:02.658) 0:00:47.828 ***** 2026-02-28 00:48:16.292101 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.292112 | orchestrator | 2026-02-28 00:48:16.292122 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-28 00:48:16.292133 | orchestrator | Saturday 28 February 2026 00:47:35 +0000 (0:00:01.127) 0:00:48.956 ***** 2026-02-28 00:48:16.292144 | orchestrator | changed: 
[testbed-manager] 2026-02-28 00:48:16.292154 | orchestrator | 2026-02-28 00:48:16.292165 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-28 00:48:16.292176 | orchestrator | Saturday 28 February 2026 00:47:36 +0000 (0:00:01.721) 0:00:50.677 ***** 2026-02-28 00:48:16.292187 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.292197 | orchestrator | 2026-02-28 00:48:16.292208 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:48:16.292219 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:16.292230 | orchestrator | 2026-02-28 00:48:16.292241 | orchestrator | 2026-02-28 00:48:16.292252 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:48:16.292263 | orchestrator | Saturday 28 February 2026 00:47:37 +0000 (0:00:00.635) 0:00:51.313 ***** 2026-02-28 00:48:16.292273 | orchestrator | =============================================================================== 2026-02-28 00:48:16.292284 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.17s 2026-02-28 00:48:16.292295 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.44s 2026-02-28 00:48:16.292305 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.66s 2026-02-28 00:48:16.292316 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.54s 2026-02-28 00:48:16.292326 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.78s 2026-02-28 00:48:16.292337 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.72s 2026-02-28 00:48:16.292348 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.13s 
2026-02-28 00:48:16.292358 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.99s 2026-02-28 00:48:16.292369 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.64s 2026-02-28 00:48:16.292380 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.53s 2026-02-28 00:48:16.292390 | orchestrator | 2026-02-28 00:48:16.292401 | orchestrator | 2026-02-28 00:48:16.292412 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-02-28 00:48:16.292426 | orchestrator | 2026-02-28 00:48:16.292444 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-02-28 00:48:16.292464 | orchestrator | Saturday 28 February 2026 00:47:04 +0000 (0:00:00.281) 0:00:00.281 ***** 2026-02-28 00:48:16.292482 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.292500 | orchestrator | 2026-02-28 00:48:16.292519 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-02-28 00:48:16.292538 | orchestrator | Saturday 28 February 2026 00:47:06 +0000 (0:00:01.893) 0:00:02.175 ***** 2026-02-28 00:48:16.292563 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-02-28 00:48:16.292574 | orchestrator | 2026-02-28 00:48:16.292585 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-02-28 00:48:16.292597 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:00.744) 0:00:02.920 ***** 2026-02-28 00:48:16.292607 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.292618 | orchestrator | 2026-02-28 00:48:16.292629 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-02-28 00:48:16.292640 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:00.945) 0:00:03.865 ***** 2026-02-28 00:48:16.292651 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-02-28 00:48:16.292662 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:16.292672 | orchestrator | 2026-02-28 00:48:16.292683 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-02-28 00:48:16.292694 | orchestrator | Saturday 28 February 2026 00:48:08 +0000 (0:01:00.696) 0:01:04.562 ***** 2026-02-28 00:48:16.292705 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:16.292716 | orchestrator | 2026-02-28 00:48:16.292727 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:48:16.292738 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:16.292749 | orchestrator | 2026-02-28 00:48:16.292759 | orchestrator | 2026-02-28 00:48:16.292770 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:48:16.292791 | orchestrator | Saturday 28 February 2026 00:48:12 +0000 (0:00:04.242) 0:01:08.804 ***** 2026-02-28 00:48:16.292811 | orchestrator | =============================================================================== 2026-02-28 00:48:16.292829 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.70s 2026-02-28 00:48:16.292846 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.24s 2026-02-28 00:48:16.292862 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.89s 2026-02-28 00:48:16.292879 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.95s 2026-02-28 00:48:16.292896 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.74s 2026-02-28 00:48:16.293115 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task 
cc7ccf94-230d-453c-a171-f0aae0dbbc96 is in state SUCCESS 2026-02-28 00:48:16.293571 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:48:16.295069 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state STARTED 2026-02-28 00:48:16.295105 | orchestrator | 2026-02-28 00:48:16 | INFO  | Wait 1 second(s) until the next check 
2026-02-28 00:48:28.545021 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED 2026-02-28 00:48:28.547075 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:48:28.551869 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:48:28.554193 | orchestrator | 2026-02-28 00:48:28.554232 | orchestrator | 2026-02-28 00:48:28.554244 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:48:28.554256 | orchestrator | 2026-02-28 00:48:28.554285 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:48:28.554297 | orchestrator | Saturday 28 February 2026 00:46:46 +0000 (0:00:01.462) 0:00:01.462 ***** 2026-02-28 00:48:28.554309 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-28 00:48:28.554318 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-28 00:48:28.554324 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-28 00:48:28.554331 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-28 00:48:28.554337 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-28 00:48:28.554343 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-28 00:48:28.554349 | orchestrator | 
changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-28 00:48:28.554355 | orchestrator | 2026-02-28 00:48:28.554362 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-28 00:48:28.554368 | orchestrator | 2026-02-28 00:48:28.554374 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-28 00:48:28.554380 | orchestrator | Saturday 28 February 2026 00:46:48 +0000 (0:00:02.017) 0:00:03.480 ***** 2026-02-28 00:48:28.554398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:48:28.554411 | orchestrator | 2026-02-28 00:48:28.554417 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-28 00:48:28.554461 | orchestrator | Saturday 28 February 2026 00:46:49 +0000 (0:00:01.429) 0:00:04.910 ***** 2026-02-28 00:48:28.554470 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:28.554478 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:48:28.554484 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:48:28.554490 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:48:28.554497 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:48:28.554503 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:48:28.554509 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:48:28.554515 | orchestrator | 2026-02-28 00:48:28.554521 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-28 00:48:28.554547 | orchestrator | Saturday 28 February 2026 00:46:52 +0000 (0:00:03.032) 0:00:07.942 ***** 2026-02-28 00:48:28.554554 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:48:28.554560 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:28.554566 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:48:28.554572 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:48:28.554578 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:48:28.554584 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:48:28.554590 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:48:28.554596 | orchestrator | 2026-02-28 00:48:28.554603 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-28 00:48:28.554609 | orchestrator | Saturday 28 February 2026 00:46:55 +0000 (0:00:03.330) 0:00:11.273 ***** 2026-02-28 00:48:28.554615 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:28.554621 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:28.554627 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:28.554634 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:28.554640 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:28.554646 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:28.554652 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:28.554658 | orchestrator | 2026-02-28 00:48:28.554664 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-28 00:48:28.554670 | orchestrator | Saturday 28 February 2026 00:46:59 +0000 (0:00:03.340) 0:00:14.614 ***** 2026-02-28 00:48:28.554677 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:28.554683 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:28.554689 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:28.554695 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:28.554701 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:28.554707 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:28.554733 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:28.554740 | orchestrator | 2026-02-28 00:48:28.554747 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 
2026-02-28 00:48:28.554753 | orchestrator | Saturday 28 February 2026 00:47:11 +0000 (0:00:11.885) 0:00:26.499 ***** 2026-02-28 00:48:28.554759 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:28.554765 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:28.554771 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:28.554777 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:28.554783 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:28.554791 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:28.554798 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:28.554805 | orchestrator | 2026-02-28 00:48:28.554812 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-28 00:48:28.554819 | orchestrator | Saturday 28 February 2026 00:47:52 +0000 (0:00:41.599) 0:01:08.099 ***** 2026-02-28 00:48:28.554827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:48:28.554836 | orchestrator | 2026-02-28 00:48:28.554843 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-28 00:48:28.554851 | orchestrator | Saturday 28 February 2026 00:47:54 +0000 (0:00:01.862) 0:01:09.961 ***** 2026-02-28 00:48:28.554858 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-28 00:48:28.554865 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-28 00:48:28.554872 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-28 00:48:28.554879 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-28 00:48:28.554898 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-28 00:48:28.554925 | orchestrator | changed: [testbed-node-5] => 
(item=netdata.conf) 2026-02-28 00:48:28.554932 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-28 00:48:28.554951 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-28 00:48:28.554958 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-28 00:48:28.554965 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-28 00:48:28.554972 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-28 00:48:28.554979 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-28 00:48:28.554986 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-28 00:48:28.554992 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-28 00:48:28.554998 | orchestrator | 2026-02-28 00:48:28.555004 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-28 00:48:28.555012 | orchestrator | Saturday 28 February 2026 00:48:00 +0000 (0:00:05.656) 0:01:15.618 ***** 2026-02-28 00:48:28.555018 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:28.555024 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:48:28.555030 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:48:28.555036 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:48:28.555043 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:48:28.555049 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:48:28.555055 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:48:28.555061 | orchestrator | 2026-02-28 00:48:28.555067 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-28 00:48:28.555073 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:01.531) 0:01:17.150 ***** 2026-02-28 00:48:28.555079 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:28.555085 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:28.555092 | orchestrator | changed: 
[testbed-node-1]
2026-02-28 00:48:28.555098 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:28.555104 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:28.555110 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:28.555116 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:28.555122 | orchestrator |
2026-02-28 00:48:28.555128 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-28 00:48:28.555135 | orchestrator | Saturday 28 February 2026 00:48:03 +0000 (0:00:01.698) 0:01:18.849 *****
2026-02-28 00:48:28.555141 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:28.555147 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:28.555153 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:28.555159 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:28.555165 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:28.555171 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:28.555177 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:28.555183 | orchestrator |
2026-02-28 00:48:28.555189 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-28 00:48:28.555195 | orchestrator | Saturday 28 February 2026 00:48:05 +0000 (0:00:01.854) 0:01:20.703 *****
2026-02-28 00:48:28.555202 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:28.555208 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:28.555213 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:28.555220 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:28.555225 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:28.555232 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:28.555238 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:28.555244 | orchestrator |
2026-02-28 00:48:28.555250 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-28 00:48:28.555256 | orchestrator | Saturday 28 February 2026 00:48:08 +0000 (0:00:03.044) 0:01:23.748 *****
2026-02-28 00:48:28.555262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-28 00:48:28.555271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:48:28.555277 | orchestrator |
2026-02-28 00:48:28.555284 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-28 00:48:28.555294 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:02.621) 0:01:26.369 *****
2026-02-28 00:48:28.555300 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:28.555306 | orchestrator |
2026-02-28 00:48:28.555313 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-28 00:48:28.555319 | orchestrator | Saturday 28 February 2026 00:48:14 +0000 (0:00:03.934) 0:01:30.303 *****
2026-02-28 00:48:28.555325 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:28.555331 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:28.555337 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:28.555343 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:28.555349 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:28.555355 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:28.555361 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:28.555367 | orchestrator |
2026-02-28 00:48:28.555373 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:28.555379 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555388 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555394 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555400 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555411 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555421 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555427 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:28.555433 | orchestrator |
2026-02-28 00:48:28.555440 | orchestrator |
2026-02-28 00:48:28.555446 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:28.555452 | orchestrator | Saturday 28 February 2026 00:48:26 +0000 (0:00:11.613) 0:01:41.917 *****
2026-02-28 00:48:28.555458 | orchestrator | ===============================================================================
2026-02-28 00:48:28.555484 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.60s
2026-02-28 00:48:28.555491 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.89s
2026-02-28 00:48:28.555497 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.61s
2026-02-28 00:48:28.555503 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.66s
2026-02-28 00:48:28.555509 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.93s
2026-02-28 00:48:28.555515 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.34s
2026-02-28 00:48:28.555521 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.32s
2026-02-28 00:48:28.555527 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.04s
2026-02-28 00:48:28.555533 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.04s
2026-02-28 00:48:28.555539 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.62s
2026-02-28 00:48:28.555545 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.02s
2026-02-28 00:48:28.555552 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.86s
2026-02-28 00:48:28.555558 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.85s
2026-02-28 00:48:28.555568 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.70s
2026-02-28 00:48:28.555574 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.53s
2026-02-28 00:48:28.555581 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.43s
2026-02-28 00:48:28.555587 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task 8fe4d8ab-fd6b-4979-81e1-ef20d393e6e2 is in state SUCCESS
2026-02-28 00:48:28.555593 | orchestrator | 2026-02-28 00:48:28 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:31.625248 | orchestrator | 2026-02-28 00:48:31 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:31.626657 | orchestrator | 2026-02-28 00:48:31 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:31.630588 | orchestrator | 2026-02-28 00:48:31 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:31.631810 | orchestrator | 2026-02-28 00:48:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:34.682595 | orchestrator | 
2026-02-28 00:48:34 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:34.684334 | orchestrator | 2026-02-28 00:48:34 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:34.688793 | orchestrator | 2026-02-28 00:48:34 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:34.688864 | orchestrator | 2026-02-28 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:37.749251 | orchestrator | 2026-02-28 00:48:37 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:37.749364 | orchestrator | 2026-02-28 00:48:37 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:37.750242 | orchestrator | 2026-02-28 00:48:37 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:37.750281 | orchestrator | 2026-02-28 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:40.800211 | orchestrator | 2026-02-28 00:48:40 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:40.800717 | orchestrator | 2026-02-28 00:48:40 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:40.803054 | orchestrator | 2026-02-28 00:48:40 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:40.803126 | orchestrator | 2026-02-28 00:48:40 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:43.868726 | orchestrator | 2026-02-28 00:48:43 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:43.868817 | orchestrator | 2026-02-28 00:48:43 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:43.869910 | orchestrator | 2026-02-28 00:48:43 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:43.870105 | orchestrator | 2026-02-28 00:48:43 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:46.932622 | orchestrator | 2026-02-28 00:48:46 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:46.932692 | orchestrator | 2026-02-28 00:48:46 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:46.932699 | orchestrator | 2026-02-28 00:48:46 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:46.932704 | orchestrator | 2026-02-28 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:49.970613 | orchestrator | 2026-02-28 00:48:49 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:49.972573 | orchestrator | 2026-02-28 00:48:49 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:49.974271 | orchestrator | 2026-02-28 00:48:49 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:49.974410 | orchestrator | 2026-02-28 00:48:49 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:53.012181 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:53.012264 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:53.013041 | orchestrator | 2026-02-28 00:48:53 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:53.013146 | orchestrator | 2026-02-28 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:56.053747 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:56.055587 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:56.057221 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:56.057449 | orchestrator | 2026-02-28 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:59.094858 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state STARTED
2026-02-28 00:48:59.096803 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:48:59.098965 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:48:59.099025 | orchestrator | 2026-02-28 00:48:59 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:02.148699 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task fe70e6fe-b3e7-41e2-a5ac-192ff13cc3ad is in state SUCCESS
2026-02-28 00:49:02.151903 | orchestrator |
2026-02-28 00:49:02.151986 | orchestrator |
2026-02-28 00:49:02.151999 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-28 00:49:02.152010 | orchestrator |
2026-02-28 00:49:02.152021 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-28 00:49:02.152031 | orchestrator | Saturday 28 February 2026 00:46:37 +0000 (0:00:00.333) 0:00:00.333 *****
2026-02-28 00:49:02.152043 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:49:02.152054 | orchestrator |
2026-02-28 00:49:02.152064 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-28 00:49:02.152074 | orchestrator | Saturday 28 February 2026 00:46:38 +0000 (0:00:01.354) 0:00:01.687 *****
2026-02-28 00:49:02.152084 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152094 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 
'fluentd'])
2026-02-28 00:49:02.152104 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152114 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152124 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152133 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-28 00:49:02.152143 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152173 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-28 00:49:02.152183 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-28 00:49:02.152193 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-28 00:49:02.152203 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152220 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-28 00:49:02.152231 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-28 00:49:02.152241 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-28 00:49:02.152250 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-28 00:49:02.152260 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-28 00:49:02.152270 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-28 00:49:02.152279 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-28 00:49:02.152289 | 
orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:02.152299 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:02.152309 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:49:02.152318 | orchestrator | 2026-02-28 00:49:02.152328 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-28 00:49:02.152338 | orchestrator | Saturday 28 February 2026 00:46:43 +0000 (0:00:04.273) 0:00:05.961 ***** 2026-02-28 00:49:02.152348 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:49:02.152360 | orchestrator | 2026-02-28 00:49:02.152369 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-28 00:49:02.152379 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:01.449) 0:00:07.410 ***** 2026-02-28 00:49:02.152393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.152407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.152432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.152452 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.152469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-28 00:49:02.152482 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.152493 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152518 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.152556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152674 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.152720 | orchestrator | 2026-02-28 00:49:02.152730 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-28 00:49:02.152740 | orchestrator | Saturday 28 February 2026 00:46:49 +0000 (0:00:05.112) 0:00:12.523 ***** 2026-02-28 00:49:02.152750 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.152761 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152782 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:02.152799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.152816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.152856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152876 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:02.152886 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:02.152896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.152907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.152964 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:02.152974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.152989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153021 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:02.153031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153057 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:02.153072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153104 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:02.153114 | 
orchestrator | 2026-02-28 00:49:02.153124 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-28 00:49:02.153133 | orchestrator | Saturday 28 February 2026 00:46:50 +0000 (0:00:01.332) 0:00:13.855 ***** 2026-02-28 00:49:02.153147 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153158 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153168 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153184 | 
orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:02.153194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153230 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:02.153240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153275 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:02.153285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153361 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:02.153370 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:02.153385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153424 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:02.153434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:49:02.153449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.153470 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:02.153479 | orchestrator | 2026-02-28 00:49:02.153489 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-28 00:49:02.153499 | orchestrator | Saturday 28 February 2026 00:46:53 +0000 (0:00:02.451) 0:00:16.307 ***** 2026-02-28 00:49:02.153509 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:02.153518 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:02.153528 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:49:02.153538 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:02.153547 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:02.153557 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:02.153566 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:02.153576 | orchestrator | 2026-02-28 00:49:02.153586 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-28 00:49:02.153596 | orchestrator | Saturday 28 February 2026 00:46:54 +0000 (0:00:01.515) 0:00:17.822 ***** 2026-02-28 00:49:02.153627 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:49:02.153639 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:49:02.153648 | orchestrator | skipping: [testbed-node-1] 2026-02-28 
00:49:02.153658 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:49:02.153668 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:49:02.153677 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:49:02.153687 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:49:02.153697 | orchestrator | 2026-02-28 00:49:02.153706 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-28 00:49:02.153724 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:02.149) 0:00:19.972 ***** 2026-02-28 00:49:02.153734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153812 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153854 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153864 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.153880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153967 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.153988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.154003 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-28 00:49:02.154075 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.154088 | orchestrator | 2026-02-28 00:49:02.154098 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-28 00:49:02.154109 | orchestrator | Saturday 28 February 2026 00:47:05 +0000 (0:00:08.536) 0:00:28.509 ***** 2026-02-28 00:49:02.154119 | orchestrator | [WARNING]: Skipped 2026-02-28 00:49:02.154130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-28 00:49:02.154147 | orchestrator | to this access issue: 2026-02-28 00:49:02.154156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-28 00:49:02.154166 | orchestrator | directory 2026-02-28 00:49:02.154176 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 00:49:02.154186 | orchestrator | 2026-02-28 00:49:02.154196 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-28 00:49:02.154206 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:01.808) 0:00:30.317 ***** 2026-02-28 00:49:02.154216 | orchestrator | [WARNING]: Skipped 2026-02-28 00:49:02.154226 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-28 00:49:02.154235 | orchestrator | to this access issue: 2026-02-28 00:49:02.154245 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-28 00:49:02.154255 | orchestrator | directory 2026-02-28 
00:49:02.154265 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 00:49:02.154274 | orchestrator | 2026-02-28 00:49:02.154289 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-28 00:49:02.154299 | orchestrator | Saturday 28 February 2026 00:47:08 +0000 (0:00:00.800) 0:00:31.117 ***** 2026-02-28 00:49:02.154309 | orchestrator | [WARNING]: Skipped 2026-02-28 00:49:02.154319 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-28 00:49:02.154329 | orchestrator | to this access issue: 2026-02-28 00:49:02.154339 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-28 00:49:02.154348 | orchestrator | directory 2026-02-28 00:49:02.154358 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 00:49:02.154368 | orchestrator | 2026-02-28 00:49:02.154378 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-28 00:49:02.154388 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:01.105) 0:00:32.223 ***** 2026-02-28 00:49:02.154398 | orchestrator | [WARNING]: Skipped 2026-02-28 00:49:02.154408 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-28 00:49:02.154418 | orchestrator | to this access issue: 2026-02-28 00:49:02.154428 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-28 00:49:02.154462 | orchestrator | directory 2026-02-28 00:49:02.154473 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 00:49:02.154483 | orchestrator | 2026-02-28 00:49:02.154493 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-28 00:49:02.154503 | orchestrator | Saturday 28 February 2026 00:47:10 +0000 (0:00:00.814) 0:00:33.038 ***** 2026-02-28 00:49:02.154512 | orchestrator | changed: 
[testbed-node-0] 2026-02-28 00:49:02.154522 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:02.154532 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:02.154542 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:02.154551 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:02.154561 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:02.154571 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:02.154580 | orchestrator | 2026-02-28 00:49:02.154590 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-28 00:49:02.154600 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:04.285) 0:00:37.323 ***** 2026-02-28 00:49:02.154610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154630 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154640 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154649 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154666 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154675 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:49:02.154685 | orchestrator | 2026-02-28 00:49:02.154695 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-28 00:49:02.154705 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:03.466) 0:00:40.790 
***** 2026-02-28 00:49:02.154715 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:02.154725 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:02.154735 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:02.154744 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:02.154760 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:49:02.154770 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:02.154780 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:49:02.154789 | orchestrator | 2026-02-28 00:49:02.154799 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-28 00:49:02.154809 | orchestrator | Saturday 28 February 2026 00:47:21 +0000 (0:00:03.853) 0:00:44.643 ***** 2026-02-28 00:49:02.154820 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.154830 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.154846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.154856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.154866 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.154894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.154905 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.154922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.154932 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.154973 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.154985 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.154995 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.155022 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.155053 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155063 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:49:02.155089 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155099 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155115 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155125 | orchestrator | 2026-02-28 00:49:02.155135 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-28 00:49:02.155145 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:03.024) 0:00:47.668 ***** 2026-02-28 00:49:02.155160 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155177 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155192 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155236 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155254 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155268 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155285 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-28 00:49:02.155300 | orchestrator | 2026-02-28 00:49:02.155327 | orchestrator | TASK 
[common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-28 00:49:02.155343 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:03.364) 0:00:51.033 ***** 2026-02-28 00:49:02.155359 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155392 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155407 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155425 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155440 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155457 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-28 00:49:02.155474 | orchestrator | 2026-02-28 00:49:02.155490 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-28 00:49:02.155504 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:03.490) 0:00:54.523 ***** 2026-02-28 00:49:02.155515 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155532 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:49:02.155600 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-02-28 00:49:02.155625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
00:49:02.155677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155711 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:49:02.155808 | orchestrator | 2026-02-28 00:49:02.155819 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-28 00:49:02.155829 | orchestrator | Saturday 28 February 2026 00:47:35 +0000 (0:00:03.628) 0:00:58.152 ***** 2026-02-28 00:49:02.155844 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:02.155854 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:02.155864 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:02.155874 | orchestrator | changed: [testbed-manager] 2026-02-28 00:49:02.155884 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:49:02.155893 | orchestrator | changed: [testbed-node-3] 2026-02-28 
00:49:02.155903 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:02.155913 | orchestrator |
2026-02-28 00:49:02.155923 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-28 00:49:02.155933 | orchestrator | Saturday 28 February 2026 00:47:37 +0000 (0:00:02.464) 0:01:00.617 *****
2026-02-28 00:49:02.155942 | orchestrator | changed: [testbed-manager]
2026-02-28 00:49:02.155973 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:02.155983 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:02.155993 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:02.156003 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:49:02.156034 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:49:02.156044 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:02.156054 | orchestrator |
2026-02-28 00:49:02.156064 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156074 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:01.695) 0:01:02.312 *****
2026-02-28 00:49:02.156084 | orchestrator |
2026-02-28 00:49:02.156093 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156103 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.074) 0:01:02.386 *****
2026-02-28 00:49:02.156113 | orchestrator |
2026-02-28 00:49:02.156123 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156132 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.081) 0:01:02.468 *****
2026-02-28 00:49:02.156142 | orchestrator |
2026-02-28 00:49:02.156152 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156162 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.261) 0:01:02.729 *****
2026-02-28 00:49:02.156172 | orchestrator |
2026-02-28 00:49:02.156181 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156191 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.068) 0:01:02.797 *****
2026-02-28 00:49:02.156201 | orchestrator |
2026-02-28 00:49:02.156216 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156226 | orchestrator | Saturday 28 February 2026 00:47:40 +0000 (0:00:00.067) 0:01:02.865 *****
2026-02-28 00:49:02.156235 | orchestrator |
2026-02-28 00:49:02.156245 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-28 00:49:02.156255 | orchestrator | Saturday 28 February 2026 00:47:40 +0000 (0:00:00.066) 0:01:02.931 *****
2026-02-28 00:49:02.156265 | orchestrator |
2026-02-28 00:49:02.156274 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-28 00:49:02.156284 | orchestrator | Saturday 28 February 2026 00:47:40 +0000 (0:00:00.089) 0:01:03.021 *****
2026-02-28 00:49:02.156294 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:02.156304 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:49:02.156313 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:02.156323 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:02.156333 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:02.156342 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:49:02.156352 | orchestrator | changed: [testbed-manager]
2026-02-28 00:49:02.156361 | orchestrator |
2026-02-28 00:49:02.156371 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-28 00:49:02.156381 | orchestrator | Saturday 28 February 2026 00:48:11 +0000 (0:00:31.797) 0:01:34.818 *****
2026-02-28 00:49:02.156391 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:02.156400 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:49:02.156410 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:49:02.156420 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:02.156430 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:02.156439 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:02.156449 | orchestrator | changed: [testbed-manager]
2026-02-28 00:49:02.156459 | orchestrator |
2026-02-28 00:49:02.156468 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-28 00:49:02.156478 | orchestrator | Saturday 28 February 2026 00:48:44 +0000 (0:00:32.467) 0:02:07.285 *****
2026-02-28 00:49:02.156488 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:49:02.156498 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:49:02.156508 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:49:02.156518 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:49:02.156527 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:49:02.156537 | orchestrator | ok: [testbed-manager]
2026-02-28 00:49:02.156547 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:49:02.156557 | orchestrator |
2026-02-28 00:49:02.156567 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-28 00:49:02.156583 | orchestrator | Saturday 28 February 2026 00:48:48 +0000 (0:00:03.754) 0:02:11.039 *****
2026-02-28 00:49:02.156595 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:02.156612 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:02.156633 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:49:02.156653 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:49:02.156670 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:02.156685 | orchestrator | changed: [testbed-manager]
2026-02-28 00:49:02.156701 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:49:02.156717 | orchestrator |
2026-02-28 00:49:02.156733
| orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:49:02.156752 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156787 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156802 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156822 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156842 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156852 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156862 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-28 00:49:02.156872 | orchestrator |
2026-02-28 00:49:02.156882 | orchestrator |
2026-02-28 00:49:02.156892 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:49:02.156905 | orchestrator | Saturday 28 February 2026 00:49:00 +0000 (0:00:11.862) 0:02:22.902 *****
2026-02-28 00:49:02.156921 | orchestrator | ===============================================================================
2026-02-28 00:49:02.156937 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.47s
2026-02-28 00:49:02.157019 | orchestrator | common : Restart fluentd container ------------------------------------- 31.80s
2026-02-28 00:49:02.157038 | orchestrator | common : Restart cron container ---------------------------------------- 11.86s
2026-02-28 00:49:02.157056 | orchestrator | common : Copying over config.json files for services -------------------- 8.54s
2026-02-28 00:49:02.157072 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.11s
2026-02-28 00:49:02.157088 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.29s
2026-02-28 00:49:02.157105 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.27s
2026-02-28 00:49:02.157120 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.85s
2026-02-28 00:49:02.157145 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.75s
2026-02-28 00:49:02.157162 | orchestrator | common : Check common containers ---------------------------------------- 3.63s
2026-02-28 00:49:02.157179 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.49s
2026-02-28 00:49:02.157196 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.47s
2026-02-28 00:49:02.157213 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.37s
2026-02-28 00:49:02.157230 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.02s
2026-02-28 00:49:02.157246 | orchestrator | common : Creating log volume -------------------------------------------- 2.46s
2026-02-28 00:49:02.157276 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.45s
2026-02-28 00:49:02.157293 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.15s
2026-02-28 00:49:02.157309 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.81s
2026-02-28 00:49:02.157327 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.70s
2026-02-28 00:49:02.157342 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.52s
2026-02-28
00:49:02.157357 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:02.157373 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:02.157606 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:02.157629 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:02.157642 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:02.157656 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:02.157670 | orchestrator | 2026-02-28 00:49:02 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:05.194196 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:05.195115 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:05.196160 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:05.202599 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:05.208015 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:05.210369 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:05.210426 | orchestrator | 2026-02-28 00:49:05 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:08.243462 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:08.244563 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:08.245524 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:08.246406 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:08.247323 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:08.248229 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:08.248264 | orchestrator | 2026-02-28 00:49:08 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:11.290904 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:11.291774 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:11.292732 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:11.295216 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:11.296196 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:11.298941 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:11.300128 | orchestrator | 2026-02-28 00:49:11 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:14.374709 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:14.377860 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:14.380692 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:14.381852 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:14.382611 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:14.385493 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:14.385519 | orchestrator | 2026-02-28 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:17.419840 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:17.420468 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:17.421823 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:17.423345 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:17.425782 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:17.428265 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:17.428307 | orchestrator | 2026-02-28 00:49:17 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:20.470868 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state STARTED
2026-02-28 00:49:20.471107 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:20.471958 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:20.475524 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:20.476234 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:20.477588 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:20.477634 | orchestrator | 2026-02-28 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:23.534573 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task f11eed86-9cdd-41e7-b09b-f76691239b33 is in state SUCCESS
2026-02-28 00:49:23.535891 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:23.538116 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:23.540203 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:23.542729 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:23.543608 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED
2026-02-28 00:49:23.545582 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:23.545629 | orchestrator | 2026-02-28 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:26.583350 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:26.583464 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:26.583476 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:26.583484 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:26.583499 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED
2026-02-28 00:49:26.587347 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:26.587399 | orchestrator | 2026-02-28 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:29.649448 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:29.649490 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:29.649494 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:29.649498 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:49:29.649502 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED
2026-02-28 00:49:29.649506 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:29.649510 | orchestrator | 2026-02-28 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:32.772724 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state STARTED
2026-02-28 00:49:32.778326 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED
2026-02-28 00:49:32.784325 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:49:32.789985 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28
00:49:32.792938 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED
2026-02-28 00:49:32.798811 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED
2026-02-28 00:49:32.798866 | orchestrator | 2026-02-28 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:49:35.926366 | orchestrator |
2026-02-28 00:49:35.926442 | orchestrator |
2026-02-28 00:49:35.926456 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:49:35.926466 | orchestrator |
2026-02-28 00:49:35.926476 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:49:35.926486 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:00.379) 0:00:00.379 *****
2026-02-28 00:49:35.926508 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:49:35.926518 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:49:35.926527 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:49:35.926536 | orchestrator |
2026-02-28 00:49:35.926544 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:49:35.926553 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:00.589) 0:00:00.969 *****
2026-02-28 00:49:35.926565 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-28 00:49:35.926579 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-28 00:49:35.926600 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-28 00:49:35.926615 | orchestrator |
2026-02-28 00:49:35.926628 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-28 00:49:35.926642 | orchestrator |
2026-02-28 00:49:35.926655 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-28 00:49:35.926687 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:00.795) 0:00:01.764 *****
2026-02-28 00:49:35.926702 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:49:35.926717 | orchestrator |
2026-02-28 00:49:35.926730 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-28 00:49:35.926744 | orchestrator | Saturday 28 February 2026 00:49:08 +0000 (0:00:00.973) 0:00:02.738 *****
2026-02-28 00:49:35.926759 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-28 00:49:35.926774 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-28 00:49:35.926840 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-28 00:49:35.926850 | orchestrator |
2026-02-28 00:49:35.926859 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-28 00:49:35.926869 | orchestrator | Saturday 28 February 2026 00:49:09 +0000 (0:00:01.109) 0:00:03.847 *****
2026-02-28 00:49:35.926878 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-28 00:49:35.926890 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-28 00:49:35.926903 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-28 00:49:35.926924 | orchestrator |
2026-02-28 00:49:35.926940 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-28 00:49:35.926954 | orchestrator | Saturday 28 February 2026 00:49:12 +0000 (0:00:02.752) 0:00:06.600 *****
2026-02-28 00:49:35.926967 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:35.926981 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:35.927014 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:35.927029 | orchestrator |
2026-02-28 00:49:35.927044 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-28 00:49:35.927057 | orchestrator | Saturday 28 February 2026 00:49:15 +0000 (0:00:03.020) 0:00:09.621 *****
2026-02-28 00:49:35.927070 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:49:35.927083 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:49:35.927094 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:49:35.927106 | orchestrator |
2026-02-28 00:49:35.927129 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:49:35.927145 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:49:35.927162 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:49:35.927176 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:49:35.927188 | orchestrator |
2026-02-28 00:49:35.927197 | orchestrator |
2026-02-28 00:49:35.927207 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:49:35.927216 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:03.649) 0:00:13.271 *****
2026-02-28 00:49:35.927234 | orchestrator | ===============================================================================
2026-02-28 00:49:35.927243 | orchestrator | memcached : Restart memcached container --------------------------------- 3.65s
2026-02-28 00:49:35.927252 | orchestrator | memcached : Check memcached container ----------------------------------- 3.02s
2026-02-28 00:49:35.927261 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.75s
2026-02-28 00:49:35.927270 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.11s
2026-02-28 00:49:35.927293 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.97s
2026-02-28 00:49:35.927309 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2026-02-28 00:49:35.927318 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2026-02-28 00:49:35.927327 | orchestrator |
2026-02-28 00:49:35.927336 | orchestrator |
2026-02-28 00:49:35.927344 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:49:35.927354 | orchestrator |
2026-02-28 00:49:35.927363 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:49:35.927372 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:00.344) 0:00:00.344 *****
2026-02-28 00:49:35.927381 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:49:35.927390 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:49:35.927399 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:49:35.927408 | orchestrator |
2026-02-28 00:49:35.927417 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:49:35.927442 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:00.524) 0:00:00.868 *****
2026-02-28 00:49:35.927452 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-28 00:49:35.927461 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-28 00:49:35.927470 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-28 00:49:35.927479 | orchestrator |
2026-02-28 00:49:35.927488 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-28 00:49:35.927497 | orchestrator |
2026-02-28 00:49:35.927506 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-28 00:49:35.927515 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:00.818) 0:00:01.687 *****
2026-02-28 00:49:35.927528 |
orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:49:35.927543 | orchestrator |
2026-02-28 00:49:35.927555 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-28 00:49:35.927568 | orchestrator | Saturday 28 February 2026 00:49:08 +0000 (0:00:00.832) 0:00:02.519 *****
2026-02-28 00:49:35.927586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927688 | orchestrator |
2026-02-28 00:49:35.927698 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-28 00:49:35.927710 | orchestrator | Saturday 28 February 2026 00:49:09 +0000 (0:00:01.658) 0:00:04.178 *****
2026-02-28 00:49:35.927719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927821 | orchestrator |
2026-02-28 00:49:35.927833 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-28 00:49:35.927846 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:03.632) 0:00:07.811 *****
2026-02-28 00:49:35.927859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.927975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.928028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-28 00:49:35.928045 | orchestrator |
2026-02-28 00:49:35.928059 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-28 00:49:35.928073 | orchestrator | Saturday 28 February 2026 00:49:16 +0000 (0:00:03.574) 0:00:11.385 *****
2026-02-28 00:49:35.928088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-28
00:49:35.928103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:35.928127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:35.928139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:35.928148 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:35.928169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:35.928178 | orchestrator | 2026-02-28 00:49:35.928186 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:49:35.928194 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:01.878) 0:00:13.264 ***** 2026-02-28 00:49:35.928202 | orchestrator | 2026-02-28 00:49:35.928210 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:49:35.928219 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.302) 0:00:13.567 ***** 2026-02-28 
00:49:35.928226 | orchestrator | 2026-02-28 00:49:35.928234 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:49:35.928242 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.539) 0:00:14.106 ***** 2026-02-28 00:49:35.928250 | orchestrator | 2026-02-28 00:49:35.928258 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-28 00:49:35.928271 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.224) 0:00:14.331 ***** 2026-02-28 00:49:35.928281 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:35.928295 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:35.928308 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:35.928321 | orchestrator | 2026-02-28 00:49:35.928335 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-28 00:49:35.928348 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:06.372) 0:00:20.703 ***** 2026-02-28 00:49:35.928362 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:35.928376 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:35.928390 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:35.928403 | orchestrator | 2026-02-28 00:49:35.928417 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:49:35.928429 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:35.928438 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:35.928446 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:35.928454 | orchestrator | 2026-02-28 00:49:35.928462 | orchestrator | 2026-02-28 00:49:35.928469 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 00:49:35.928478 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:07.395) 0:00:28.099 ***** 2026-02-28 00:49:35.928486 | orchestrator | =============================================================================== 2026-02-28 00:49:35.928494 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.40s 2026-02-28 00:49:35.928502 | orchestrator | redis : Restart redis container ----------------------------------------- 6.37s 2026-02-28 00:49:35.928510 | orchestrator | redis : Copying over default config.json files -------------------------- 3.63s 2026-02-28 00:49:35.928517 | orchestrator | redis : Copying over redis config files --------------------------------- 3.57s 2026-02-28 00:49:35.928530 | orchestrator | redis : Check redis containers ------------------------------------------ 1.88s 2026-02-28 00:49:35.928538 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.66s 2026-02-28 00:49:35.928546 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.07s 2026-02-28 00:49:35.928553 | orchestrator | redis : include_tasks --------------------------------------------------- 0.83s 2026-02-28 00:49:35.928561 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-02-28 00:49:35.928569 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2026-02-28 00:49:35.928577 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task e693b609-7586-4387-9316-64f9c082ba36 is in state SUCCESS 2026-02-28 00:49:35.928585 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:35.928593 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:35.928733 | orchestrator 
| 2026-02-28 00:49:35 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:35.928750 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:35.929430 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:35.930058 | orchestrator | 2026-02-28 00:49:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:39.025950 | orchestrator | 2026-02-28 00:49:39 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:39.029065 | orchestrator | 2026-02-28 00:49:39 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:39.051258 | orchestrator | 2026-02-28 00:49:39 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:39.057151 | orchestrator | 2026-02-28 00:49:39 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:39.057981 | orchestrator | 2026-02-28 00:49:39 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:39.058065 | orchestrator | 2026-02-28 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:42.101780 | orchestrator | 2026-02-28 00:49:42 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:42.104324 | orchestrator | 2026-02-28 00:49:42 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:42.106969 | orchestrator | 2026-02-28 00:49:42 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:42.109749 | orchestrator | 2026-02-28 00:49:42 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:42.111113 | orchestrator | 2026-02-28 00:49:42 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:42.112189 | orchestrator | 
2026-02-28 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:45.192946 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:45.193066 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:45.193074 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:45.193078 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:45.193082 | orchestrator | 2026-02-28 00:49:45 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:45.193087 | orchestrator | 2026-02-28 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:48.232869 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:48.234974 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:48.235232 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:48.236306 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:48.238095 | orchestrator | 2026-02-28 00:49:48 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:48.238146 | orchestrator | 2026-02-28 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:51.311121 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:51.311571 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:51.312551 | orchestrator | 2026-02-28 00:49:51 | INFO  | 
Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:51.313251 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:51.315842 | orchestrator | 2026-02-28 00:49:51 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:51.315914 | orchestrator | 2026-02-28 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:54.436530 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:54.437108 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:54.440872 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:54.440977 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:54.441005 | orchestrator | 2026-02-28 00:49:54 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:54.441058 | orchestrator | 2026-02-28 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:57.472320 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:49:57.472633 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:49:57.473557 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:49:57.474300 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:49:57.475263 | orchestrator | 2026-02-28 00:49:57 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:49:57.475294 | orchestrator | 2026-02-28 00:49:57 | INFO  | Wait 1 
second(s) until the next check 2026-02-28 00:50:00.594240 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:00.594353 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:00.596663 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:00.598097 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:00.602802 | orchestrator | 2026-02-28 00:50:00 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:00.602859 | orchestrator | 2026-02-28 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:03.850128 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:03.851402 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:03.852208 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:03.855577 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:03.856222 | orchestrator | 2026-02-28 00:50:03 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:03.856269 | orchestrator | 2026-02-28 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:06.896796 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:06.897572 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:06.900206 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 
adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:06.901556 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:06.902856 | orchestrator | 2026-02-28 00:50:06 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:06.902893 | orchestrator | 2026-02-28 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:09.942299 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:09.942854 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:09.943686 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:09.945181 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:09.946004 | orchestrator | 2026-02-28 00:50:09 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:09.946145 | orchestrator | 2026-02-28 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:13.003269 | orchestrator | 2026-02-28 00:50:13 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:13.003370 | orchestrator | 2026-02-28 00:50:13 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:13.014139 | orchestrator | 2026-02-28 00:50:13 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:13.014212 | orchestrator | 2026-02-28 00:50:13 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:13.014219 | orchestrator | 2026-02-28 00:50:13 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:13.014225 | orchestrator | 2026-02-28 00:50:13 | INFO  | Wait 1 
second(s) until the next check 2026-02-28 00:50:16.087904 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:16.088887 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:16.090347 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:16.092843 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:16.093854 | orchestrator | 2026-02-28 00:50:16 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:16.093887 | orchestrator | 2026-02-28 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:19.134608 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:19.134952 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:19.135716 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:19.136308 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:19.137042 | orchestrator | 2026-02-28 00:50:19 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:19.137138 | orchestrator | 2026-02-28 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:22.173505 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:22.174422 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:22.175470 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 
adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:22.176950 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:22.178343 | orchestrator | 2026-02-28 00:50:22 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state STARTED 2026-02-28 00:50:22.178397 | orchestrator | 2026-02-28 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:25.229930 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:25.231506 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:50:25.233010 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:25.234882 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:25.236360 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:25.238629 | orchestrator | 2026-02-28 00:50:25 | INFO  | Task 30e0fe87-adb1-444d-b83d-4ff20edaced3 is in state SUCCESS 2026-02-28 00:50:25.238740 | orchestrator | 2026-02-28 00:50:25.240972 | orchestrator | 2026-02-28 00:50:25.241003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:50:25.241011 | orchestrator | 2026-02-28 00:50:25.241019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:50:25.241027 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:00.518) 0:00:00.518 ***** 2026-02-28 00:50:25.241032 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:50:25.241037 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:50:25.241042 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:50:25.241049 | orchestrator | ok: 
[testbed-node-3] 2026-02-28 00:50:25.241055 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:50:25.241061 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:50:25.241067 | orchestrator | 2026-02-28 00:50:25.241074 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:50:25.241080 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:01.234) 0:00:01.752 ***** 2026-02-28 00:50:25.241103 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:25.241110 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:25.241117 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:25.241124 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:25.241130 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:25.241137 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:25.241143 | orchestrator | 2026-02-28 00:50:25.241150 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-28 00:50:25.241157 | orchestrator | 2026-02-28 00:50:25.241163 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-28 00:50:25.241170 | orchestrator | Saturday 28 February 2026 00:49:08 +0000 (0:00:01.082) 0:00:02.835 ***** 2026-02-28 00:50:25.241178 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:50:25.241188 | orchestrator | 2026-02-28 00:50:25.241194 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-28 
00:50:25.241201 | orchestrator | Saturday 28 February 2026 00:49:10 +0000 (0:00:02.187) 0:00:05.022 ***** 2026-02-28 00:50:25.241208 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-28 00:50:25.241233 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-28 00:50:25.241240 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-28 00:50:25.241246 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-28 00:50:25.241254 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-28 00:50:25.241258 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-28 00:50:25.241262 | orchestrator | 2026-02-28 00:50:25.241266 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-28 00:50:25.241270 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:02.273) 0:00:07.295 ***** 2026-02-28 00:50:25.241274 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-28 00:50:25.241278 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-28 00:50:25.241282 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-28 00:50:25.241286 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-28 00:50:25.241289 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-28 00:50:25.241293 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-28 00:50:25.241297 | orchestrator | 2026-02-28 00:50:25.241301 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-28 00:50:25.241305 | orchestrator | Saturday 28 February 2026 00:49:16 +0000 (0:00:02.837) 0:00:10.132 ***** 2026-02-28 00:50:25.241309 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-28 00:50:25.241313 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:25.241318 | orchestrator | 
skipping: [testbed-node-1] => (item=openvswitch)  2026-02-28 00:50:25.241322 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:25.241326 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-28 00:50:25.241330 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:25.241334 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-28 00:50:25.241338 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:25.241342 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-28 00:50:25.241346 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:25.241350 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-28 00:50:25.241354 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:25.241357 | orchestrator | 2026-02-28 00:50:25.241361 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-28 00:50:25.241365 | orchestrator | Saturday 28 February 2026 00:49:17 +0000 (0:00:01.569) 0:00:11.701 ***** 2026-02-28 00:50:25.241369 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:25.241373 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:25.241377 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:25.241381 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:25.241385 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:25.241388 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:25.241392 | orchestrator | 2026-02-28 00:50:25.241396 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-28 00:50:25.241400 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:00.976) 0:00:12.677 ***** 2026-02-28 00:50:25.241421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241468 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241472 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241497 | orchestrator | 2026-02-28 00:50:25.241501 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-28 00:50:25.241505 | orchestrator | Saturday 28 February 2026 00:49:23 +0000 (0:00:04.829) 0:00:17.507 ***** 2026-02-28 00:50:25.241509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241514 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241577 | orchestrator | 2026-02-28 00:50:25.241582 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-28 00:50:25.241586 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:03.911) 0:00:21.418 ***** 2026-02-28 00:50:25.241591 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:25.241595 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:25.241600 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:25.241605 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:25.241609 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:25.241614 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:25.241618 | orchestrator | 2026-02-28 00:50:25.241623 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-28 00:50:25.241628 | orchestrator | Saturday 28 February 2026 00:49:29 +0000 (0:00:02.021) 0:00:23.440 ***** 2026-02-28 00:50:25.241632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241669 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241706 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:25.241715 | orchestrator | 2026-02-28 00:50:25.241720 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:25.241724 | orchestrator | Saturday 28 February 2026 00:49:32 +0000 (0:00:03.042) 0:00:26.483 ***** 2026-02-28 00:50:25.241728 | orchestrator | 2026-02-28 00:50:25.241732 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:25.241736 | orchestrator | Saturday 28 February 2026 00:49:32 +0000 (0:00:00.331) 0:00:26.814 ***** 2026-02-28 00:50:25.241740 | orchestrator | 2026-02-28 00:50:25.241744 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:25.241748 | orchestrator | Saturday 28 February 2026 00:49:32 +0000 (0:00:00.140) 0:00:26.954 ***** 2026-02-28 00:50:25.241752 | orchestrator | 2026-02-28 00:50:25.241755 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:25.241759 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.288) 0:00:27.242 ***** 2026-02-28 00:50:25.241763 | orchestrator | 2026-02-28 00:50:25.241767 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:25.241771 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.374) 0:00:27.616 ***** 2026-02-28 00:50:25.241775 | orchestrator | 2026-02-28 00:50:25.241779 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-28 00:50:25.241786 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.275) 0:00:27.892 ***** 2026-02-28 00:50:25.241790 | orchestrator | 2026-02-28 00:50:25.241794 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-28 00:50:25.241798 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.177) 0:00:28.070 ***** 2026-02-28 00:50:25.241802 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:25.241806 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:50:25.241810 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:25.241814 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:25.241818 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:50:25.241822 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:50:25.241825 | orchestrator | 2026-02-28 00:50:25.241830 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-28 00:50:25.241834 | orchestrator | Saturday 28 February 2026 00:49:43 +0000 (0:00:09.491) 0:00:37.561 ***** 2026-02-28 00:50:25.241838 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:50:25.241841 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:50:25.241845 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:50:25.241849 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:50:25.241853 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:50:25.241857 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:50:25.241861 | orchestrator | 2026-02-28 00:50:25.241865 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-28 00:50:25.241869 | orchestrator | Saturday 28 February 2026 00:49:45 +0000 (0:00:01.551) 0:00:39.112 ***** 2026-02-28 00:50:25.241873 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:25.241877 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:50:25.241881 
| orchestrator | changed: [testbed-node-3] 2026-02-28 00:50:25.241884 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:25.241888 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:25.241892 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:50:25.241896 | orchestrator | 2026-02-28 00:50:25.241903 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-28 00:50:25.241907 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:11.880) 0:00:50.993 ***** 2026-02-28 00:50:25.241913 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-28 00:50:25.241917 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-28 00:50:25.241921 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-28 00:50:25.241925 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-28 00:50:25.241929 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-28 00:50:25.241933 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-28 00:50:25.241937 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-28 00:50:25.241941 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-28 00:50:25.241945 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-28 00:50:25.241949 | orchestrator | changed: [testbed-node-2] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-28 00:50:25.241953 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-28 00:50:25.241957 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-28 00:50:25.241961 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:25.241968 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:25.241972 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:25.241975 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:25.241979 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:25.241983 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:25.241987 | orchestrator | 2026-02-28 00:50:25.241991 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-28 00:50:25.241995 | orchestrator | Saturday 28 February 2026 00:50:06 +0000 (0:00:09.316) 0:01:00.310 ***** 2026-02-28 00:50:25.241999 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-28 00:50:25.242003 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:25.242007 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-28 00:50:25.242011 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:25.242060 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-28 
00:50:25.242067 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:25.242071 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-28 00:50:25.242075 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-28 00:50:25.242079 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-28 00:50:25.242083 | orchestrator | 2026-02-28 00:50:25.242175 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-28 00:50:25.242187 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:03.257) 0:01:03.567 ***** 2026-02-28 00:50:25.242191 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:50:25.242195 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:25.242199 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:50:25.242203 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:25.242207 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:50:25.242211 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:25.242215 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:50:25.242219 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:50:25.242223 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:50:25.242227 | orchestrator | 2026-02-28 00:50:25.242231 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-28 00:50:25.242235 | orchestrator | Saturday 28 February 2026 00:50:13 +0000 (0:00:04.376) 0:01:07.944 ***** 2026-02-28 00:50:25.242238 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:25.242242 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:25.242246 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:50:25.242250 | orchestrator | changed: [testbed-node-2] 2026-02-28 
00:50:25.242254 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:50:25.242258 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:50:25.242262 | orchestrator | 2026-02-28 00:50:25.242266 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:50:25.242274 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:50:25.242285 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:50:25.242290 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:50:25.242300 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:50:25.242304 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:50:25.242308 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:50:25.242312 | orchestrator | 2026-02-28 00:50:25.242316 | orchestrator | 2026-02-28 00:50:25.242320 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:50:25.242324 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:08.888) 0:01:16.833 ***** 2026-02-28 00:50:25.242328 | orchestrator | =============================================================================== 2026-02-28 00:50:25.242332 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.77s 2026-02-28 00:50:25.242336 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.49s 2026-02-28 00:50:25.242340 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.32s 2026-02-28 00:50:25.242344 | orchestrator | openvswitch : Ensuring config 
directories exist ------------------------- 4.83s 2026-02-28 00:50:25.242347 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.38s 2026-02-28 00:50:25.242351 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.91s 2026-02-28 00:50:25.242355 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.26s 2026-02-28 00:50:25.242359 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.04s 2026-02-28 00:50:25.242363 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.84s 2026-02-28 00:50:25.242367 | orchestrator | module-load : Load modules ---------------------------------------------- 2.27s 2026-02-28 00:50:25.242371 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.19s 2026-02-28 00:50:25.242375 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.02s 2026-02-28 00:50:25.242378 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.59s 2026-02-28 00:50:25.242382 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.57s 2026-02-28 00:50:25.242386 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.55s 2026-02-28 00:50:25.242390 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.23s 2026-02-28 00:50:25.242394 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s 2026-02-28 00:50:25.242398 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.98s 2026-02-28 00:50:25.242402 | orchestrator | 2026-02-28 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:28.286788 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 
cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state STARTED 2026-02-28 00:50:28.287621 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:50:28.288374 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:50:28.289341 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:50:28.290125 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:50:28.290530 | orchestrator | 2026-02-28 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:27.287504 | orchestrator | 2026-02-28 00:51:27 | INFO  | Task cf447e83-a74e-48e8-947f-c63ff1a2c7a7 is in state SUCCESS 2026-02-28 00:51:27.290451 | orchestrator | 2026-02-28 00:51:27.290530 | orchestrator | 2026-02-28 00:51:27.290543 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-28 00:51:27.290556 | orchestrator | 2026-02-28 00:51:27.290604 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-28 00:51:27.290617 | orchestrator | Saturday 28 February 2026 00:46:37 +0000 (0:00:00.161) 0:00:00.161 ***** 2026-02-28 00:51:27.290628 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:51:27.290640 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:51:27.290651 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:51:27.290662 | orchestrator | ok: [testbed-node-0] 
2026-02-28 00:51:27.290710 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:27.290721 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:27.290733 | orchestrator | 2026-02-28 00:51:27.290744 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-28 00:51:27.290756 | orchestrator | Saturday 28 February 2026 00:46:38 +0000 (0:00:00.725) 0:00:00.887 ***** 2026-02-28 00:51:27.290767 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.290780 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.290791 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.290802 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.290814 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.290855 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.290867 | orchestrator | 2026-02-28 00:51:27.290877 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-28 00:51:27.290887 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:00.581) 0:00:01.469 ***** 2026-02-28 00:51:27.290897 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.290908 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.290917 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.290927 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.290937 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.290947 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.290957 | orchestrator | 2026-02-28 00:51:27.290967 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-28 00:51:27.290977 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:00.642) 0:00:02.111 ***** 2026-02-28 00:51:27.290988 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:27.290998 | orchestrator | changed: [testbed-node-0] 2026-02-28 
00:51:27.291008 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:27.291018 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:51:27.291029 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:51:27.291039 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:27.291050 | orchestrator | 2026-02-28 00:51:27.291060 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-28 00:51:27.291071 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:02.534) 0:00:04.646 ***** 2026-02-28 00:51:27.291081 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:27.291092 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:27.291103 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:27.291113 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:51:27.291122 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:51:27.291132 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:51:27.291141 | orchestrator | 2026-02-28 00:51:27.291151 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-28 00:51:27.291160 | orchestrator | Saturday 28 February 2026 00:46:43 +0000 (0:00:01.026) 0:00:05.672 ***** 2026-02-28 00:51:27.291170 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:27.291181 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:27.291241 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:27.291252 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:51:27.291262 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:51:27.291273 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:51:27.291282 | orchestrator | 2026-02-28 00:51:27.291292 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-28 00:51:27.291302 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:00.865) 0:00:06.537 ***** 2026-02-28 00:51:27.291313 | 
orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.291323 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.291333 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.291342 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.291353 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.291363 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.291373 | orchestrator | 2026-02-28 00:51:27.291465 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-28 00:51:27.291478 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:00.635) 0:00:07.173 ***** 2026-02-28 00:51:27.291489 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.291499 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.291510 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.291521 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.291532 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.291543 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.291554 | orchestrator | 2026-02-28 00:51:27.291566 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-28 00:51:27.291577 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:00.658) 0:00:07.831 ***** 2026-02-28 00:51:27.291589 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:51:27.291601 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:51:27.291613 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.291624 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:51:27.291636 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:51:27.291647 | orchestrator | skipping: 
[testbed-node-4] 2026-02-28 00:51:27.291659 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:51:27.291670 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:51:27.291681 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.291693 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:51:27.291721 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:51:27.291732 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.291743 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:51:27.291754 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:51:27.291764 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.291773 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-28 00:51:27.291790 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-28 00:51:27.291799 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.291808 | orchestrator | 2026-02-28 00:51:27.291817 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-28 00:51:27.291826 | orchestrator | Saturday 28 February 2026 00:46:46 +0000 (0:00:00.745) 0:00:08.577 ***** 2026-02-28 00:51:27.291835 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.291844 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.291853 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.291862 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.291871 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.291880 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.291889 | orchestrator | 
2026-02-28 00:51:27.291898 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-28 00:51:27.291909 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:01.328) 0:00:09.905 *****
2026-02-28 00:51:27.291918 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:27.291927 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:27.291937 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:27.291946 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.291955 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.291963 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.291973 | orchestrator |
2026-02-28 00:51:27.291982 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-28 00:51:27.291991 | orchestrator | Saturday 28 February 2026 00:46:48 +0000 (0:00:00.892) 0:00:10.798 *****
2026-02-28 00:51:27.292006 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.292016 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:27.292025 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:27.292034 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.292043 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.292052 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:27.292061 | orchestrator |
2026-02-28 00:51:27.292070 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-28 00:51:27.292079 | orchestrator | Saturday 28 February 2026 00:46:54 +0000 (0:00:05.653) 0:00:16.451 *****
2026-02-28 00:51:27.292088 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.292097 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292106 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.292115 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292123 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.292131 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292140 | orchestrator |
2026-02-28 00:51:27.292149 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-28 00:51:27.292158 | orchestrator | Saturday 28 February 2026 00:46:55 +0000 (0:00:01.761) 0:00:18.213 *****
2026-02-28 00:51:27.292167 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.292176 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.292185 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.292209 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292218 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292227 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292236 | orchestrator |
2026-02-28 00:51:27.292245 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-28 00:51:27.292256 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:02.087) 0:00:20.301 *****
2026-02-28 00:51:27.292265 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.292274 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.292283 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.292292 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292301 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292310 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292319 | orchestrator |
2026-02-28 00:51:27.292328 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-28 00:51:27.292337 | orchestrator | Saturday 28 February 2026 00:46:58 +0000 (0:00:00.992) 0:00:21.293 *****
2026-02-28 00:51:27.292346 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-28 00:51:27.292355 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-28 00:51:27.292364 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.292373 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-28 00:51:27.292382 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-28 00:51:27.292391 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.292400 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-28 00:51:27.292409 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-28 00:51:27.292418 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.292427 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-28 00:51:27.292436 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-28 00:51:27.292445 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292454 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-28 00:51:27.292463 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-28 00:51:27.292472 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292481 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-28 00:51:27.292490 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-28 00:51:27.292504 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292513 | orchestrator |
2026-02-28 00:51:27.292523 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-28 00:51:27.292537 | orchestrator | Saturday 28 February 2026 00:47:00 +0000 (0:00:01.864) 0:00:23.158 *****
2026-02-28 00:51:27.292545 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.292553 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.292561 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.292570 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292577 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292585 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292593 | orchestrator |
2026-02-28 00:51:27.292601 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-28 00:51:27.292614 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:01.003) 0:00:24.161 *****
2026-02-28 00:51:27.292622 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.292630 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.292638 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.292647 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292655 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292664 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292673 | orchestrator |
2026-02-28 00:51:27.292682 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-28 00:51:27.292690 | orchestrator |
2026-02-28 00:51:27.292735 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-28 00:51:27.292745 | orchestrator | Saturday 28 February 2026 00:47:03 +0000 (0:00:02.115) 0:00:26.277 *****
2026-02-28 00:51:27.292754 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.292763 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.292771 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.292780 | orchestrator |
2026-02-28 00:51:27.292788 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-28 00:51:27.292797 | orchestrator | Saturday 28 February 2026 00:47:05 +0000 (0:00:01.969) 0:00:28.246 *****
2026-02-28 00:51:27.292805 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.292814 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.292822 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.292830 | orchestrator |
2026-02-28 00:51:27.292839 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-28 00:51:27.292847 | orchestrator | Saturday 28 February 2026 00:47:06 +0000 (0:00:01.122) 0:00:29.369 *****
2026-02-28 00:51:27.292857 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.292865 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.292874 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.292882 | orchestrator |
2026-02-28 00:51:27.292891 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-28 00:51:27.292900 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:00.999) 0:00:30.369 *****
2026-02-28 00:51:27.292908 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.292917 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.292926 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.292934 | orchestrator |
2026-02-28 00:51:27.292943 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-28 00:51:27.292952 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:01.031) 0:00:31.401 *****
2026-02-28 00:51:27.292961 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.292970 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.292979 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.292987 | orchestrator |
2026-02-28 00:51:27.292995 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-28 00:51:27.293004 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:00.395) 0:00:31.797 *****
2026-02-28 00:51:27.293012 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.293021 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.293037 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293046 | orchestrator |
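The download tasks above fetch only the x64 binary and skip the arm64/armhf variants, because the testbed nodes are all x86_64. A minimal sketch of that architecture dispatch, assuming the asset naming used by k3s GitHub releases (`k3s`, `k3s-arm64`, `k3s-armhf`); the function name is illustrative, not part of the playbook:

```shell
#!/bin/sh
# Map the machine architecture to the matching k3s release asset name,
# mirroring the x64/arm64/armhf download tasks above. Asset names assume
# the k3s GitHub release naming convention.
k3s_asset() {
  case "$1" in
    x86_64)  echo "k3s" ;;
    aarch64) echo "k3s-arm64" ;;
    armv7l)  echo "k3s-armhf" ;;
    *)       echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Pick the asset for the local machine (prints "k3s" on an x86_64 node).
k3s_asset "$(uname -m)" || true
```

Only one of the three download tasks can match on a given host, which is why the other two show `skipping` for every node.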
2026-02-28 00:51:27.293055 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-28 00:51:27.293064 | orchestrator | Saturday 28 February 2026 00:47:10 +0000 (0:00:01.236) 0:00:33.033 *****
2026-02-28 00:51:27.293072 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.293081 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293089 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.293097 | orchestrator |
2026-02-28 00:51:27.293105 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-28 00:51:27.293115 | orchestrator | Saturday 28 February 2026 00:47:12 +0000 (0:00:02.032) 0:00:35.066 *****
2026-02-28 00:51:27.293123 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:51:27.293132 | orchestrator |
2026-02-28 00:51:27.293140 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-28 00:51:27.293150 | orchestrator | Saturday 28 February 2026 00:47:13 +0000 (0:00:00.573) 0:00:35.639 *****
2026-02-28 00:51:27.293158 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.293168 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.293177 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.293209 | orchestrator |
2026-02-28 00:51:27.293219 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-28 00:51:27.293228 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:01.575) 0:00:37.215 *****
2026-02-28 00:51:27.293237 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.293246 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.293254 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293263 | orchestrator |
2026-02-28 00:51:27.293271 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-28 00:51:27.293279 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:00.750) 0:00:37.965 *****
2026-02-28 00:51:27.293288 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.293296 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.293305 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293313 | orchestrator |
2026-02-28 00:51:27.293322 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-28 00:51:27.293331 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:00.936) 0:00:38.901 *****
2026-02-28 00:51:27.293340 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.293348 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.293357 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293366 | orchestrator |
2026-02-28 00:51:27.293374 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-28 00:51:27.293391 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:01.821) 0:00:40.723 *****
2026-02-28 00:51:27.293400 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.293408 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.293417 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.293426 | orchestrator |
2026-02-28 00:51:27.293436 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-28 00:51:27.293446 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:00.869) 0:00:41.592 *****
2026-02-28 00:51:27.293455 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.293469 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.293478 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.293487 | orchestrator |
2026-02-28 00:51:27.293496 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-28 00:51:27.293505 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:00.777) 0:00:42.370 *****
2026-02-28 00:51:27.293514 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.293522 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.293531 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293547 | orchestrator |
2026-02-28 00:51:27.293556 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-28 00:51:27.293565 | orchestrator | Saturday 28 February 2026 00:47:22 +0000 (0:00:02.390) 0:00:44.760 *****
2026-02-28 00:51:27.293574 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.293583 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.293592 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.293601 | orchestrator |
2026-02-28 00:51:27.293610 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-28 00:51:27.293619 | orchestrator | Saturday 28 February 2026 00:47:25 +0000 (0:00:03.369) 0:00:48.130 *****
2026-02-28 00:51:27.293628 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.293637 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.293646 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.293655 | orchestrator |
2026-02-28 00:51:27.293664 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-28 00:51:27.293673 | orchestrator | Saturday 28 February 2026 00:47:26 +0000 (0:00:00.821) 0:00:48.951 *****
2026-02-28 00:51:27.293683 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-28 00:51:27.293693 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
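The `FAILED - RETRYING` lines that follow are expected during bootstrap: the task polls until every master shows up in the cluster, using the usual Ansible `retries`/`delay` pattern (20 retries here). A minimal sketch of the same polling idea in shell, under the assumption that the check is a `kubectl get nodes` count; the `retry` helper and the exact check are illustrative:

```shell
#!/bin/sh
# Generic retry helper mirroring Ansible's retries/delay loop: run a check
# until it succeeds or the attempt budget is exhausted.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=0
  while ! "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
  done
}

# Illustrative check, analogous to the "Verify that all nodes actually
# joined" task (assumes a working kubeconfig on the first master):
# retry 20 10 sh -c '[ "$(kubectl get nodes --no-headers | wc -l)" -ge 3 ]'
```

In this run the condition became true after three failed rounds per node, roughly 43 seconds into the task, as the following `ok:` results show.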
2026-02-28 00:51:27.293702 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-28 00:51:27.293711 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-28 00:51:27.293719 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-28 00:51:27.293729 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-28 00:51:27.293738 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-28 00:51:27.293748 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-28 00:51:27.293758 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-28 00:51:27.293767 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-28 00:51:27.293790 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-28 00:51:27.293799 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-28 00:51:27.293817 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.293826 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.293835 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.293844 | orchestrator |
2026-02-28 00:51:27.293854 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-28 00:51:27.293862 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:43.517) 0:01:32.468 *****
2026-02-28 00:51:27.293872 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.293880 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.293889 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.293897 | orchestrator |
2026-02-28 00:51:27.293906 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-28 00:51:27.293915 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:00.585) 0:01:33.054 *****
2026-02-28 00:51:27.293930 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293939 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.293947 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.293956 | orchestrator |
2026-02-28 00:51:27.293965 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-28 00:51:27.293974 | orchestrator | Saturday 28 February 2026 00:48:12 +0000 (0:00:01.663) 0:01:34.717 *****
2026-02-28 00:51:27.293983 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.293992 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294001 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294009 | orchestrator |
2026-02-28 00:51:27.294080 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-28 00:51:27.294091 | orchestrator | Saturday 28 February 2026 00:48:14 +0000 (0:00:02.288) 0:01:37.006 *****
2026-02-28 00:51:27.294100 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294108 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.294118 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294127 | orchestrator |
2026-02-28 00:51:27.294135 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-28 00:51:27.294149 | orchestrator | Saturday 28 February 2026 00:48:40 +0000 (0:00:25.590) 0:02:02.597 *****
2026-02-28 00:51:27.294158 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.294167 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.294177 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.294200 | orchestrator |
2026-02-28 00:51:27.294209 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-28 00:51:27.294218 | orchestrator | Saturday 28 February 2026 00:48:41 +0000 (0:00:00.816) 0:02:03.413 *****
2026-02-28 00:51:27.294227 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.294237 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.294246 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.294255 | orchestrator |
2026-02-28 00:51:27.294264 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-28 00:51:27.294272 | orchestrator | Saturday 28 February 2026 00:48:41 +0000 (0:00:00.702) 0:02:04.116 *****
2026-02-28 00:51:27.294281 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.294289 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294298 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294307 | orchestrator |
2026-02-28 00:51:27.294316 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-28 00:51:27.294324 | orchestrator | Saturday 28 February 2026 00:48:42 +0000 (0:00:00.699) 0:02:04.815 *****
2026-02-28 00:51:27.294333 | orchestrator | ok: [testbed-node-2]
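The node-token task sequence (register the file's access mode, change it, read the token, restore the mode) is the classic pattern for reading a root-only secret without permanently loosening its permissions. A hedged sketch of the same dance; the token path and the temporary `644` mode are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the node-token handling above: remember the file's mode,
# relax it long enough to read the token, then restore the original mode.
read_token() {
  token_file=$1                          # e.g. /var/lib/rancher/k3s/server/node-token (assumed path)
  orig_mode=$(stat -c %a "$token_file")  # register node-token file access mode
  chmod 644 "$token_file"                # change file access (temporary mode is an assumption)
  token=$(cat "$token_file")             # read node-token from master
  chmod "$orig_mode" "$token_file"       # restore node-token file access
  echo "$token"
}
```

The token read here is what the worker-node play below uses to join the agents to the masters.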
2026-02-28 00:51:27.294342 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.294351 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.294360 | orchestrator |
2026-02-28 00:51:27.294369 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-28 00:51:27.294377 | orchestrator | Saturday 28 February 2026 00:48:43 +0000 (0:00:01.244) 0:02:06.060 *****
2026-02-28 00:51:27.294385 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.294394 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.294402 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.294410 | orchestrator |
2026-02-28 00:51:27.294419 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-28 00:51:27.294447 | orchestrator | Saturday 28 February 2026 00:48:44 +0000 (0:00:00.539) 0:02:06.599 *****
2026-02-28 00:51:27.294456 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.294464 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294473 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294481 | orchestrator |
2026-02-28 00:51:27.294490 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-28 00:51:27.294498 | orchestrator | Saturday 28 February 2026 00:48:45 +0000 (0:00:00.913) 0:02:07.513 *****
2026-02-28 00:51:27.294506 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.294514 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294530 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294539 | orchestrator |
2026-02-28 00:51:27.294547 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-28 00:51:27.294556 | orchestrator | Saturday 28 February 2026 00:48:46 +0000 (0:00:01.241) 0:02:08.755 *****
2026-02-28 00:51:27.294565 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.294573 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294581 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294590 | orchestrator |
2026-02-28 00:51:27.294599 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-28 00:51:27.294607 | orchestrator | Saturday 28 February 2026 00:48:47 +0000 (0:00:01.578) 0:02:10.334 *****
2026-02-28 00:51:27.294616 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:27.294624 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:27.294632 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:27.294640 | orchestrator |
2026-02-28 00:51:27.294649 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-28 00:51:27.294657 | orchestrator | Saturday 28 February 2026 00:48:48 +0000 (0:00:00.953) 0:02:11.288 *****
2026-02-28 00:51:27.294666 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.294673 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.294682 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.294689 | orchestrator |
2026-02-28 00:51:27.294698 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-28 00:51:27.294705 | orchestrator | Saturday 28 February 2026 00:48:49 +0000 (0:00:00.395) 0:02:11.683 *****
2026-02-28 00:51:27.294713 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:27.294722 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:27.294729 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:27.294737 | orchestrator |
2026-02-28 00:51:27.294745 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-28 00:51:27.294753 | orchestrator | Saturday 28 February 2026 00:48:49 +0000 (0:00:00.355) 0:02:12.038 *****
2026-02-28 00:51:27.294762 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.294770 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.294778 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.294785 | orchestrator |
2026-02-28 00:51:27.294793 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-28 00:51:27.294801 | orchestrator | Saturday 28 February 2026 00:48:50 +0000 (0:00:01.145) 0:02:13.184 *****
2026-02-28 00:51:27.294809 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:27.294818 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:27.294826 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:27.294834 | orchestrator |
2026-02-28 00:51:27.294843 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-28 00:51:27.294852 | orchestrator | Saturday 28 February 2026 00:48:51 +0000 (0:00:00.705) 0:02:13.889 *****
2026-02-28 00:51:27.294860 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:51:27.294875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:51:27.294883 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:51:27.294892 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:51:27.294901 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:51:27.294916 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:51:27.294925 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:51:27.294934 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:51:27.294942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:51:27.294957 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-28 00:51:27.294966 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:51:27.294973 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:51:27.294982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-28 00:51:27.294989 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:51:27.294997 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:51:27.295005 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:51:27.295013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:51:27.295021 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:51:27.295029 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:51:27.295037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:51:27.295046 | orchestrator |
2026-02-28 00:51:27.295054 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-28 00:51:27.295063 | orchestrator |
2026-02-28 00:51:27.295071 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-28 00:51:27.295080 | orchestrator | Saturday 28 February 2026 00:48:54 +0000 (0:00:03.053) 0:02:16.943 *****
2026-02-28 00:51:27.295088 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:27.295097 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:27.295105 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:27.295114 | orchestrator |
2026-02-28 00:51:27.295122 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-28 00:51:27.295131 | orchestrator | Saturday 28 February 2026 00:48:55 +0000 (0:00:00.643) 0:02:17.586 *****
2026-02-28 00:51:27.295140 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:27.295149 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:27.295157 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:27.295166 | orchestrator |
2026-02-28 00:51:27.295175 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-28 00:51:27.295183 | orchestrator | Saturday 28 February 2026 00:48:55 +0000 (0:00:00.665) 0:02:18.252 *****
2026-02-28 00:51:27.295217 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:27.295226 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:27.295235 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:27.295243 | orchestrator |
2026-02-28 00:51:27.295252 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-28 00:51:27.295260 | orchestrator | Saturday 28 February 2026 00:48:56 +0000 (0:00:00.362) 0:02:18.615 *****
2026-02-28 00:51:27.295268 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:51:27.295277 | orchestrator |
2026-02-28 00:51:27.295285 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-28 00:51:27.295294 | orchestrator | Saturday 28 February 2026 00:48:56 +0000 (0:00:00.739) 0:02:19.354 *****
2026-02-28 00:51:27.295302 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.295311 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.295320 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.295328 | orchestrator |
2026-02-28 00:51:27.295337 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-28 00:51:27.295345 | orchestrator | Saturday 28 February 2026 00:48:57 +0000 (0:00:00.334) 0:02:19.688 *****
2026-02-28 00:51:27.295353 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.295369 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.295378 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.295386 | orchestrator |
2026-02-28 00:51:27.295395 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-28 00:51:27.295404 | orchestrator | Saturday 28 February 2026 00:48:57 +0000 (0:00:00.334) 0:02:20.022 *****
2026-02-28 00:51:27.295412 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:27.295421 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:27.295429 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:27.295438 | orchestrator |
2026-02-28 00:51:27.295446 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-28 00:51:27.295455 | orchestrator | Saturday 28 February 2026 00:48:57 +0000 (0:00:00.311) 0:02:20.334 *****
2026-02-28 00:51:27.295463 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:27.295471 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:27.295480 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:27.295488 | orchestrator |
2026-02-28 00:51:27.295505 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-28 00:51:27.295514 | orchestrator | Saturday 28 February 2026 00:48:58 +0000 (0:00:00.853) 0:02:21.188 *****
2026-02-28 00:51:27.295523 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:27.295531 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:27.295540 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:27.295549 | orchestrator |
2026-02-28 00:51:27.295557 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-28 00:51:27.296090 | orchestrator | Saturday 28 February 2026 00:48:59 +0000 (0:00:01.059) 0:02:22.247 *****
2026-02-28 00:51:27.296172 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:27.296233 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:27.296244 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:27.296252 | orchestrator |
2026-02-28 00:51:27.296260 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-28 00:51:27.296274 | orchestrator | Saturday 28 February 2026 00:49:01 +0000 (0:00:01.191) 0:02:23.439 *****
2026-02-28 00:51:27.296282 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:27.296290 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:27.296298 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:27.296305 | orchestrator |
2026-02-28 00:51:27.296312 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-28 00:51:27.296320 | orchestrator |
2026-02-28 00:51:27.296328 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-28 00:51:27.296335 | orchestrator | Saturday 28 February 2026 00:49:11 +0000 (0:00:10.432) 0:02:33.872 *****
2026-02-28 00:51:27.296343 | orchestrator | ok: [testbed-manager]
2026-02-28 00:51:27.296350 | orchestrator |
2026-02-28 00:51:27.296357 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-28 00:51:27.296365 | orchestrator | Saturday 28 February 2026 00:49:12 +0000 (0:00:00.996) 0:02:34.869 *****
2026-02-28 00:51:27.296373 | orchestrator | changed: [testbed-manager]
2026-02-28 00:51:27.296380 | orchestrator |
2026-02-28 00:51:27.296388 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-28 00:51:27.296396 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.630) 0:02:35.499 *****
2026-02-28 00:51:27.296403 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-28 00:51:27.296411 | orchestrator |
2026-02-28 00:51:27.296419 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-28 00:51:27.296427 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.559) 0:02:36.059 *****
2026-02-28 00:51:27.296434 | orchestrator | changed: [testbed-manager]
2026-02-28 00:51:27.296442 | orchestrator |
2026-02-28 00:51:27.296450 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-28 00:51:27.296457 | orchestrator | Saturday 28 February 2026 00:49:14 +0000 (0:00:01.139) 0:02:37.198 *****
2026-02-28 00:51:27.296464 | orchestrator | changed: [testbed-manager]
2026-02-28 00:51:27.296483 | orchestrator |
2026-02-28 00:51:27.296490 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-28 00:51:27.296497 | orchestrator | Saturday 28 February 2026 00:49:15 +0000 (0:00:00.656) 0:02:37.855 *****
2026-02-28 00:51:27.296505 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-28 00:51:27.296513 | orchestrator |
2026-02-28 00:51:27.296520 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-28 00:51:27.296528 | orchestrator | Saturday 28 February 2026 00:49:17 +0000 (0:00:01.914) 0:02:39.770 *****
2026-02-28 00:51:27.296535 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-28 00:51:27.296543 | orchestrator |
2026-02-28 00:51:27.296550 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
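The two "Change server address" tasks exist because the kubeconfig fetched from the first master points at the node-local API endpoint, while clients on the manager must reach the cluster through the highly available address (the earlier "Configure kubectl cluster to https://192.168.16.8:6443" task names it). A minimal sketch of that rewrite, assuming the k3s default of `https://127.0.0.1:6443` as the original server address; the function name is illustrative:

```shell
#!/bin/sh
# Point a fetched kubeconfig at the cluster VIP instead of the local
# endpoint the master wrote into it (127.0.0.1 is the assumed k3s default).
rewrite_server() {
  kubeconfig=$1
  vip=$2
  sed -i "s|https://127.0.0.1:6443|https://${vip}:6443|" "$kubeconfig"
}

# Usage in the spirit of this run:
# rewrite_server "$HOME/.kube/config" 192.168.16.8
```

Doing the same rewrite twice (once for the operator's `~/.kube/config`, once for the copy used inside the manager service) keeps both consumers pointed at the VIP rather than at whichever master happened to serve the file.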
2026-02-28 00:51:27.296558 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:01.001) 0:02:40.771 ***** 2026-02-28 00:51:27.296565 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:27.296573 | orchestrator | 2026-02-28 00:51:27.296580 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-28 00:51:27.296587 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.890) 0:02:41.661 ***** 2026-02-28 00:51:27.296595 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:27.296603 | orchestrator | 2026-02-28 00:51:27.296610 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-28 00:51:27.296617 | orchestrator | 2026-02-28 00:51:27.296624 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-28 00:51:27.296632 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.558) 0:02:42.220 ***** 2026-02-28 00:51:27.296639 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:27.296646 | orchestrator | 2026-02-28 00:51:27.296653 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-28 00:51:27.296661 | orchestrator | Saturday 28 February 2026 00:49:20 +0000 (0:00:00.234) 0:02:42.455 ***** 2026-02-28 00:51:27.296668 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:51:27.296676 | orchestrator | 2026-02-28 00:51:27.296684 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-28 00:51:27.296691 | orchestrator | Saturday 28 February 2026 00:49:20 +0000 (0:00:00.367) 0:02:42.823 ***** 2026-02-28 00:51:27.296698 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:27.296706 | orchestrator | 2026-02-28 00:51:27.296714 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-28 00:51:27.296721 | orchestrator | Saturday 28 February 2026 00:49:21 +0000 (0:00:01.237) 0:02:44.060 ***** 2026-02-28 00:51:27.296729 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:27.296736 | orchestrator | 2026-02-28 00:51:27.296743 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-28 00:51:27.296750 | orchestrator | Saturday 28 February 2026 00:49:24 +0000 (0:00:02.512) 0:02:46.573 ***** 2026-02-28 00:51:27.296757 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:27.296764 | orchestrator | 2026-02-28 00:51:27.296772 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-28 00:51:27.296779 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:01.153) 0:02:47.726 ***** 2026-02-28 00:51:27.296786 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:27.296793 | orchestrator | 2026-02-28 00:51:27.296812 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-28 00:51:27.296820 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:00.536) 0:02:48.262 ***** 2026-02-28 00:51:27.296827 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:27.296835 | orchestrator | 2026-02-28 00:51:27.296841 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-28 00:51:27.296849 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:07.759) 0:02:56.022 ***** 2026-02-28 00:51:27.296856 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:27.296863 | orchestrator | 2026-02-28 00:51:27.296877 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-28 00:51:27.296884 | orchestrator | Saturday 28 February 2026 00:49:49 +0000 (0:00:16.192) 0:03:12.214 ***** 2026-02-28 00:51:27.296892 | orchestrator | ok: [testbed-manager] 2026-02-28 
00:51:27.296900 | orchestrator | 2026-02-28 00:51:27.296911 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-28 00:51:27.296919 | orchestrator | 2026-02-28 00:51:27.296926 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-28 00:51:27.296934 | orchestrator | Saturday 28 February 2026 00:49:50 +0000 (0:00:00.666) 0:03:12.881 ***** 2026-02-28 00:51:27.296942 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:27.296949 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:27.296956 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:27.296963 | orchestrator | 2026-02-28 00:51:27.296971 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-28 00:51:27.296978 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:00.498) 0:03:13.380 ***** 2026-02-28 00:51:27.296986 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.296993 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.297001 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.297009 | orchestrator | 2026-02-28 00:51:27.297016 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-28 00:51:27.297024 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:00.403) 0:03:13.784 ***** 2026-02-28 00:51:27.297031 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:51:27.297039 | orchestrator | 2026-02-28 00:51:27.297047 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-28 00:51:27.297055 | orchestrator | Saturday 28 February 2026 00:49:52 +0000 (0:00:01.131) 0:03:14.915 ***** 2026-02-28 00:51:27.297063 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297071 | 
orchestrator | 2026-02-28 00:51:27.297078 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-28 00:51:27.297086 | orchestrator | Saturday 28 February 2026 00:49:53 +0000 (0:00:01.265) 0:03:16.181 ***** 2026-02-28 00:51:27.297094 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297102 | orchestrator | 2026-02-28 00:51:27.297110 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-28 00:51:27.297117 | orchestrator | Saturday 28 February 2026 00:49:54 +0000 (0:00:01.073) 0:03:17.255 ***** 2026-02-28 00:51:27.297125 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297132 | orchestrator | 2026-02-28 00:51:27.297139 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-28 00:51:27.297147 | orchestrator | Saturday 28 February 2026 00:49:55 +0000 (0:00:00.121) 0:03:17.376 ***** 2026-02-28 00:51:27.297155 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297163 | orchestrator | 2026-02-28 00:51:27.297170 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-28 00:51:27.297178 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:01.051) 0:03:18.427 ***** 2026-02-28 00:51:27.297202 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297211 | orchestrator | 2026-02-28 00:51:27.297218 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-28 00:51:27.297226 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:00.178) 0:03:18.606 ***** 2026-02-28 00:51:27.297234 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297241 | orchestrator | 2026-02-28 00:51:27.297249 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-28 00:51:27.297256 | orchestrator | Saturday 28 
February 2026 00:49:56 +0000 (0:00:00.140) 0:03:18.747 ***** 2026-02-28 00:51:27.297264 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297271 | orchestrator | 2026-02-28 00:51:27.297279 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-28 00:51:27.297293 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:00.139) 0:03:18.886 ***** 2026-02-28 00:51:27.297301 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297309 | orchestrator | 2026-02-28 00:51:27.297316 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-28 00:51:27.297324 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:00.106) 0:03:18.993 ***** 2026-02-28 00:51:27.297331 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297338 | orchestrator | 2026-02-28 00:51:27.297346 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-28 00:51:27.297353 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:05.874) 0:03:24.867 ***** 2026-02-28 00:51:27.297361 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-28 00:51:27.297369 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-28 00:51:27.297378 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-28 00:51:27.297386 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-28 00:51:27.297393 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-28 00:51:27.297401 | orchestrator | 2026-02-28 00:51:27.297408 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-28 00:51:27.297415 | orchestrator | Saturday 28 February 2026 00:50:45 +0000 (0:00:42.695) 0:04:07.563 ***** 2026-02-28 00:51:27.297428 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297436 | orchestrator | 2026-02-28 00:51:27.297443 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-28 00:51:27.297450 | orchestrator | Saturday 28 February 2026 00:50:46 +0000 (0:00:01.452) 0:04:09.015 ***** 2026-02-28 00:51:27.297458 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297465 | orchestrator | 2026-02-28 00:51:27.297473 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-28 00:51:27.297481 | orchestrator | Saturday 28 February 2026 00:50:48 +0000 (0:00:01.659) 0:04:10.675 ***** 2026-02-28 00:51:27.297489 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:27.297497 | orchestrator | 2026-02-28 00:51:27.297504 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-28 00:51:27.297512 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:01.010) 0:04:11.685 ***** 2026-02-28 00:51:27.297524 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297532 | orchestrator | 2026-02-28 00:51:27.297540 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-28 00:51:27.297547 | orchestrator 
| Saturday 28 February 2026 00:50:49 +0000 (0:00:00.122) 0:04:11.808 ***** 2026-02-28 00:51:27.297555 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-28 00:51:27.297563 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-28 00:51:27.297570 | orchestrator | 2026-02-28 00:51:27.297578 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-28 00:51:27.297585 | orchestrator | Saturday 28 February 2026 00:50:51 +0000 (0:00:01.967) 0:04:13.776 ***** 2026-02-28 00:51:27.297593 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.297600 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.297608 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.297615 | orchestrator | 2026-02-28 00:51:27.297623 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-28 00:51:27.297631 | orchestrator | Saturday 28 February 2026 00:50:51 +0000 (0:00:00.319) 0:04:14.095 ***** 2026-02-28 00:51:27.297638 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:27.297646 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:27.297653 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:27.297661 | orchestrator | 2026-02-28 00:51:27.297668 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-28 00:51:27.297682 | orchestrator | 2026-02-28 00:51:27.297690 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-28 00:51:27.297697 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:01.251) 0:04:15.347 ***** 2026-02-28 00:51:27.297704 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:27.297712 | orchestrator | 2026-02-28 00:51:27.297720 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-28 00:51:27.297727 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:00.185) 0:04:15.533 ***** 2026-02-28 00:51:27.297735 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:51:27.297743 | orchestrator | 2026-02-28 00:51:27.297750 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-28 00:51:27.297757 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:00.232) 0:04:15.766 ***** 2026-02-28 00:51:27.297765 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:27.297772 | orchestrator | 2026-02-28 00:51:27.297780 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-28 00:51:27.297787 | orchestrator | 2026-02-28 00:51:27.297794 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-28 00:51:27.297802 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:07.676) 0:04:23.442 ***** 2026-02-28 00:51:27.297809 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:51:27.297817 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:51:27.297824 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:51:27.297832 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:27.297840 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:27.297847 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:27.297855 | orchestrator | 2026-02-28 00:51:27.297862 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-28 00:51:27.297869 | orchestrator | Saturday 28 February 2026 00:51:02 +0000 (0:00:01.324) 0:04:24.767 ***** 2026-02-28 00:51:27.297877 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:51:27.297884 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:51:27.297892 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:51:27.297900 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:51:27.297907 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:51:27.297914 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:51:27.297922 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:51:27.297929 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:51:27.297937 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:51:27.297945 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:51:27.297952 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:51:27.297960 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:51:27.297971 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:51:27.297979 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:51:27.297987 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:51:27.297994 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:51:27.298001 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:51:27.298013 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:51:27.298170 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:51:27.298185 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:51:27.298241 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:51:27.298248 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:51:27.298256 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:51:27.298263 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:51:27.298271 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:51:27.298316 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:51:27.298325 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:51:27.298333 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:51:27.298341 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:51:27.298349 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:51:27.298357 | orchestrator | 2026-02-28 00:51:27.298366 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-28 00:51:27.298374 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:22.338) 0:04:47.106 ***** 2026-02-28 00:51:27.298382 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.298390 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.298398 | orchestrator | 
skipping: [testbed-node-5] 2026-02-28 00:51:27.298406 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.298415 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.298423 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.298432 | orchestrator | 2026-02-28 00:51:27.298440 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-28 00:51:27.298448 | orchestrator | Saturday 28 February 2026 00:51:25 +0000 (0:00:00.945) 0:04:48.051 ***** 2026-02-28 00:51:27.298455 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:27.298463 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:27.298470 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:27.298477 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:27.298485 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:27.298492 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:27.298499 | orchestrator | 2026-02-28 00:51:27.298507 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:51:27.298514 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:51:27.298524 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-28 00:51:27.298532 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 00:51:27.298540 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 00:51:27.298548 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:51:27.298556 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:51:27.298574 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:51:27.298581 | orchestrator | 2026-02-28 00:51:27.298588 | orchestrator | 2026-02-28 00:51:27.298596 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:51:27.298603 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:00.501) 0:04:48.553 ***** 2026-02-28 00:51:27.298611 | orchestrator | =============================================================================== 2026-02-28 00:51:27.298618 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.52s 2026-02-28 00:51:27.298627 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.70s 2026-02-28 00:51:27.298634 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.59s 2026-02-28 00:51:27.298649 | orchestrator | Manage labels ---------------------------------------------------------- 22.34s 2026-02-28 00:51:27.298656 | orchestrator | kubectl : Install required packages ------------------------------------ 16.19s 2026-02-28 00:51:27.298663 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.43s 2026-02-28 00:51:27.298671 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.76s 2026-02-28 00:51:27.298678 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.68s 2026-02-28 00:51:27.298686 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.87s 2026-02-28 00:51:27.298694 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.65s 2026-02-28 00:51:27.298701 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.37s 2026-02-28 00:51:27.298713 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.05s 2026-02-28 00:51:27.298720 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.53s 2026-02-28 00:51:27.298727 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.51s 2026-02-28 00:51:27.298734 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.39s 2026-02-28 00:51:27.298742 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 2.29s 2026-02-28 00:51:27.298750 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.12s 2026-02-28 00:51:27.298758 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.09s 2026-02-28 00:51:27.298766 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.03s 2026-02-28 00:51:27.298773 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.97s 2026-02-28 00:51:27.298781 | orchestrator | 2026-02-28 00:51:27 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:51:27.298790 | orchestrator | 2026-02-28 00:51:27 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:51:27.298843 | orchestrator | 2026-02-28 00:51:27 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:51:27.300749 | orchestrator | 2026-02-28 00:51:27 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:51:27.300880 | orchestrator | 2026-02-28 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:30.353083 | orchestrator | 2026-02-28 00:51:30 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:51:30.353302 | orchestrator | 2026-02-28 00:51:30 | INFO  | Task 
c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:51:30.354683 | orchestrator | 2026-02-28 00:51:30 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:51:30.355869 | orchestrator | 2026-02-28 00:51:30 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:51:30.357093 | orchestrator | 2026-02-28 00:51:30 | INFO  | Task 6989357d-f8f2-454f-af55-eb832a60966a is in state STARTED 2026-02-28 00:51:30.358461 | orchestrator | 2026-02-28 00:51:30 | INFO  | Task 1a749253-88bb-49d7-aa98-0a7d738c2fc7 is in state STARTED 2026-02-28 00:51:30.358586 | orchestrator | 2026-02-28 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:39.544463 | orchestrator | 2026-02-28 00:51:39 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:51:39.545419 | orchestrator | 2026-02-28 00:51:39 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:51:39.547381 | orchestrator | 2026-02-28 00:51:39 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:51:39.549496 | orchestrator | 2026-02-28 00:51:39 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:51:39.551440 | orchestrator | 2026-02-28 00:51:39 | INFO  | Task 6989357d-f8f2-454f-af55-eb832a60966a is in state STARTED 2026-02-28 00:51:39.552172 | orchestrator | 2026-02-28 00:51:39 | INFO  | Task 1a749253-88bb-49d7-aa98-0a7d738c2fc7 is in state SUCCESS 2026-02-28 00:51:39.552205 | orchestrator | 2026-02-28 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:42.598536 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:51:42.599480 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:51:42.600765 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:51:42.602309 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:51:42.602979 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task 6989357d-f8f2-454f-af55-eb832a60966a is in state SUCCESS 2026-02-28 00:51:42.603495 | orchestrator | 2026-02-28 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:00.891043 | orchestrator | 2026-02-28 00:52:00 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:00.891318 | orchestrator | 2026-02-28 00:52:00 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:00.892005 | orchestrator | 2026-02-28 00:52:00 | INFO  | Task 
adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:00.892673 | orchestrator | 2026-02-28 00:52:00 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:52:00.892693 | orchestrator | 2026-02-28 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:03.931761 | orchestrator | 2026-02-28 00:52:03 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:03.933942 | orchestrator | 2026-02-28 00:52:03 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:03.940023 | orchestrator | 2026-02-28 00:52:03 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:03.941120 | orchestrator | 2026-02-28 00:52:03 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state STARTED 2026-02-28 00:52:03.941147 | orchestrator | 2026-02-28 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:06.971823 | orchestrator | 2026-02-28 00:52:06 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:06.973776 | orchestrator | 2026-02-28 00:52:06 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:06.975842 | orchestrator | 2026-02-28 00:52:06 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:06.979430 | orchestrator | 2026-02-28 00:52:06 | INFO  | Task 9b425cad-23ee-426b-8744-b479b633fdff is in state SUCCESS 2026-02-28 00:52:06.981523 | orchestrator | 2026-02-28 00:52:06.981567 | orchestrator | 2026-02-28 00:52:06.981580 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-28 00:52:06.981593 | orchestrator | 2026-02-28 00:52:06.981604 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-28 00:52:06.981615 | orchestrator | Saturday 28 February 2026 00:51:32 +0000 (0:00:00.293) 0:00:00.293 
***** 2026-02-28 00:52:06.981627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-28 00:52:06.981638 | orchestrator | 2026-02-28 00:52:06.981649 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-28 00:52:06.981660 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 (0:00:00.951) 0:00:01.244 ***** 2026-02-28 00:52:06.981671 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:06.981683 | orchestrator | 2026-02-28 00:52:06.981694 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-28 00:52:06.981705 | orchestrator | Saturday 28 February 2026 00:51:35 +0000 (0:00:01.854) 0:00:03.098 ***** 2026-02-28 00:52:06.981716 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:06.981727 | orchestrator | 2026-02-28 00:52:06.981738 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:06.981749 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:52:06.981762 | orchestrator | 2026-02-28 00:52:06.981772 | orchestrator | 2026-02-28 00:52:06.981783 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:52:06.981794 | orchestrator | Saturday 28 February 2026 00:51:36 +0000 (0:00:00.866) 0:00:03.965 ***** 2026-02-28 00:52:06.981805 | orchestrator | =============================================================================== 2026-02-28 00:52:06.981816 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.85s 2026-02-28 00:52:06.981827 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.95s 2026-02-28 00:52:06.981838 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.87s 2026-02-28 00:52:06.981872 | orchestrator | 
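The play above fetches the kubeconfig from a control-plane node and then rewrites its server address so the API endpoint is reachable from the manager. A minimal sketch of that rewrite step, with a hypothetical helper name and addresses; the actual playbook task may implement this differently:

```python
# Sketch of the "Change server address in the kubeconfig file" step.
# Hypothetical function and addresses, not the playbook's real implementation.
import re

def rewrite_kubeconfig_server(text: str, new_host: str, port: int = 6443) -> str:
    """Point every 'server:' entry in a kubeconfig at a reachable address."""
    return re.sub(
        r"(?m)^(\s*server:\s*https://)[^:/\s]+(:\d+)?",
        lambda m: f"{m.group(1)}{new_host}:{port}",
        text,
    )

# A kubeconfig fetched from the node itself typically points at localhost:
kubeconfig = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"
print(rewrite_kubeconfig_server(kubeconfig, "192.168.16.10"))
```

The rewrite is what makes the copied file usable from outside the node the cluster was bootstrapped on.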
2026-02-28 00:52:06.981884 | orchestrator | 2026-02-28 00:52:06.981895 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-28 00:52:06.981905 | orchestrator | 2026-02-28 00:52:06.981916 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-28 00:52:06.981927 | orchestrator | Saturday 28 February 2026 00:51:32 +0000 (0:00:00.178) 0:00:00.178 ***** 2026-02-28 00:52:06.981938 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:06.981949 | orchestrator | 2026-02-28 00:52:06.981960 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-28 00:52:06.981970 | orchestrator | Saturday 28 February 2026 00:51:32 +0000 (0:00:00.859) 0:00:01.038 ***** 2026-02-28 00:52:06.981981 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:06.981992 | orchestrator | 2026-02-28 00:52:06.982003 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-28 00:52:06.982014 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 (0:00:00.791) 0:00:01.830 ***** 2026-02-28 00:52:06.982078 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-28 00:52:06.982089 | orchestrator | 2026-02-28 00:52:06.982103 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-28 00:52:06.982116 | orchestrator | Saturday 28 February 2026 00:51:34 +0000 (0:00:00.975) 0:00:02.805 ***** 2026-02-28 00:52:06.982128 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:06.982141 | orchestrator | 2026-02-28 00:52:06.982154 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-28 00:52:06.982178 | orchestrator | Saturday 28 February 2026 00:51:36 +0000 (0:00:02.004) 0:00:04.810 ***** 2026-02-28 00:52:06.982191 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:06.982204 
| orchestrator | 2026-02-28 00:52:06.982217 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-28 00:52:06.982229 | orchestrator | Saturday 28 February 2026 00:51:37 +0000 (0:00:00.624) 0:00:05.434 ***** 2026-02-28 00:52:06.982241 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 00:52:06.982253 | orchestrator | 2026-02-28 00:52:06.982299 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-28 00:52:06.982314 | orchestrator | Saturday 28 February 2026 00:51:39 +0000 (0:00:01.932) 0:00:07.367 ***** 2026-02-28 00:52:06.982326 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 00:52:06.982339 | orchestrator | 2026-02-28 00:52:06.982352 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-28 00:52:06.982364 | orchestrator | Saturday 28 February 2026 00:51:40 +0000 (0:00:00.878) 0:00:08.245 ***** 2026-02-28 00:52:06.982377 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:06.982390 | orchestrator | 2026-02-28 00:52:06.982402 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-28 00:52:06.982413 | orchestrator | Saturday 28 February 2026 00:51:40 +0000 (0:00:00.609) 0:00:08.855 ***** 2026-02-28 00:52:06.982423 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:06.982434 | orchestrator | 2026-02-28 00:52:06.982445 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:06.982456 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:52:06.982467 | orchestrator | 2026-02-28 00:52:06.982478 | orchestrator | 2026-02-28 00:52:06.982489 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:52:06.982500 | orchestrator | Saturday 28 February 
2026 00:51:41 +0000 (0:00:00.344) 0:00:09.199 ***** 2026-02-28 00:52:06.982510 | orchestrator | =============================================================================== 2026-02-28 00:52:06.982521 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.00s 2026-02-28 00:52:06.982532 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.93s 2026-02-28 00:52:06.982543 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.98s 2026-02-28 00:52:06.982579 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.88s 2026-02-28 00:52:06.982590 | orchestrator | Get home directory of operator user ------------------------------------- 0.86s 2026-02-28 00:52:06.982601 | orchestrator | Create .kube directory -------------------------------------------------- 0.79s 2026-02-28 00:52:06.982612 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.62s 2026-02-28 00:52:06.982622 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.61s 2026-02-28 00:52:06.982633 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s 2026-02-28 00:52:06.982644 | orchestrator | 2026-02-28 00:52:06.982655 | orchestrator | 2026-02-28 00:52:06.982666 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-02-28 00:52:06.982676 | orchestrator | 2026-02-28 00:52:06.982687 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-28 00:52:06.982698 | orchestrator | Saturday 28 February 2026 00:49:30 +0000 (0:00:00.412) 0:00:00.412 ***** 2026-02-28 00:52:06.982709 | orchestrator | ok: [localhost] => { 2026-02-28 00:52:06.982720 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. 
This is fine." 2026-02-28 00:52:06.982732 | orchestrator | } 2026-02-28 00:52:06.982743 | orchestrator | 2026-02-28 00:52:06.982754 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-02-28 00:52:06.982765 | orchestrator | Saturday 28 February 2026 00:49:30 +0000 (0:00:00.122) 0:00:00.534 ***** 2026-02-28 00:52:06.982776 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-02-28 00:52:06.982788 | orchestrator | ...ignoring 2026-02-28 00:52:06.982799 | orchestrator | 2026-02-28 00:52:06.982809 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-02-28 00:52:06.982820 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:03.904) 0:00:04.439 ***** 2026-02-28 00:52:06.982831 | orchestrator | skipping: [localhost] 2026-02-28 00:52:06.982841 | orchestrator | 2026-02-28 00:52:06.982852 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-02-28 00:52:06.982863 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:00.132) 0:00:04.571 ***** 2026-02-28 00:52:06.982874 | orchestrator | ok: [localhost] 2026-02-28 00:52:06.982884 | orchestrator | 2026-02-28 00:52:06.982895 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:52:06.982906 | orchestrator | 2026-02-28 00:52:06.982917 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:52:06.982927 | orchestrator | Saturday 28 February 2026 00:49:35 +0000 (0:00:00.747) 0:00:05.319 ***** 2026-02-28 00:52:06.982938 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:06.982949 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:06.982960 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:06.982971 | orchestrator | 
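The "Check RabbitMQ service" task above probes the management UI on port 15672 and searches the response for the string "RabbitMQ Management"; a timeout means the service has not been deployed yet, which the play deliberately ignores. A rough stand-in for that probe (hypothetical helper, not the module Kolla actually uses):

```python
# Sketch of the management-UI probe: fetch the page and search for a marker
# string, mapping timeouts/connection errors to "not deployed yet".
import urllib.error
import urllib.request

def rabbitmq_already_running(host: str, port: int = 15672, timeout: float = 2.0) -> bool:
    """Return True only if the management UI answers and contains the marker."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return b"RabbitMQ Management" in resp.read()
    except (urllib.error.URLError, OSError):
        return False  # the outcome the playbook ignores: service not up yet

# On a fresh deployment the probe fails, so kolla_action_rabbitmq keeps the
# regular deploy action; if it succeeded, the play would switch to "upgrade".
```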
2026-02-28 00:52:06.982981 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:52:06.982992 | orchestrator | Saturday 28 February 2026 00:49:36 +0000 (0:00:01.196) 0:00:06.516 ***** 2026-02-28 00:52:06.983003 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-28 00:52:06.983014 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-28 00:52:06.983024 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-28 00:52:06.983035 | orchestrator | 2026-02-28 00:52:06.983046 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-28 00:52:06.983056 | orchestrator | 2026-02-28 00:52:06.983067 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:52:06.983084 | orchestrator | Saturday 28 February 2026 00:49:37 +0000 (0:00:00.797) 0:00:07.313 ***** 2026-02-28 00:52:06.983095 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:06.983112 | orchestrator | 2026-02-28 00:52:06.983123 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-28 00:52:06.983134 | orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:00.946) 0:00:08.259 ***** 2026-02-28 00:52:06.983145 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:06.983155 | orchestrator | 2026-02-28 00:52:06.983166 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-28 00:52:06.983177 | orchestrator | Saturday 28 February 2026 00:49:39 +0000 (0:00:01.313) 0:00:09.573 ***** 2026-02-28 00:52:06.983187 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.983198 | orchestrator | 2026-02-28 00:52:06.983209 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2026-02-28 00:52:06.983220 | orchestrator | Saturday 28 February 2026 00:49:40 +0000 (0:00:00.535) 0:00:10.108 ***** 2026-02-28 00:52:06.983231 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.983241 | orchestrator | 2026-02-28 00:52:06.983252 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-28 00:52:06.983297 | orchestrator | Saturday 28 February 2026 00:49:40 +0000 (0:00:00.505) 0:00:10.613 ***** 2026-02-28 00:52:06.983309 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.983320 | orchestrator | 2026-02-28 00:52:06.983331 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-28 00:52:06.983342 | orchestrator | Saturday 28 February 2026 00:49:41 +0000 (0:00:00.584) 0:00:11.198 ***** 2026-02-28 00:52:06.983352 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.983363 | orchestrator | 2026-02-28 00:52:06.983374 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:52:06.983385 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:00.943) 0:00:12.141 ***** 2026-02-28 00:52:06.983396 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:06.983407 | orchestrator | 2026-02-28 00:52:06.983418 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-28 00:52:06.983435 | orchestrator | Saturday 28 February 2026 00:49:43 +0000 (0:00:01.100) 0:00:13.242 ***** 2026-02-28 00:52:06.983447 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:06.983458 | orchestrator | 2026-02-28 00:52:06.983469 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-28 00:52:06.983480 | orchestrator | Saturday 28 February 2026 00:49:44 +0000 
(0:00:01.255) 0:00:14.498 ***** 2026-02-28 00:52:06.983491 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.983501 | orchestrator | 2026-02-28 00:52:06.983512 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-28 00:52:06.983523 | orchestrator | Saturday 28 February 2026 00:49:45 +0000 (0:00:00.519) 0:00:15.017 ***** 2026-02-28 00:52:06.983534 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.983544 | orchestrator | 2026-02-28 00:52:06.983555 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-28 00:52:06.983567 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:01.132) 0:00:16.155 ***** 2026-02-28 00:52:06.983583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.983613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.983627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.983639 | orchestrator | 2026-02-28 00:52:06.983650 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-28 00:52:06.983662 | orchestrator | Saturday 28 February 2026 00:49:50 +0000 (0:00:03.644) 0:00:19.799 ***** 2026-02-28 00:52:06.983681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.983694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.983718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.983730 | orchestrator | 2026-02-28 00:52:06.983741 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-28 00:52:06.983752 | orchestrator | Saturday 28 February 2026 00:49:52 +0000 (0:00:02.613) 0:00:22.413 ***** 2026-02-28 00:52:06.983763 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:52:06.983775 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:52:06.983798 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:52:06.983809 | orchestrator | 2026-02-28 00:52:06.983831 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-28 00:52:06.983842 | orchestrator | Saturday 28 February 2026 00:49:55 +0000 (0:00:02.671) 0:00:25.085 ***** 2026-02-28 00:52:06.983853 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:52:06.983863 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:52:06.983875 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:52:06.983886 | orchestrator | 2026-02-28 00:52:06.983902 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-28 00:52:06.983913 | orchestrator | Saturday 28 February 2026 00:49:57 +0000 (0:00:02.415) 0:00:27.500 ***** 2026-02-28 00:52:06.983924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:52:06.983935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:52:06.983946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:52:06.983956 | orchestrator | 2026-02-28 00:52:06.983967 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-28 00:52:06.983978 | orchestrator | Saturday 28 February 2026 00:50:01 +0000 (0:00:03.403) 0:00:30.904 ***** 
2026-02-28 00:52:06.983989 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:52:06.984006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:52:06.984017 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:52:06.984028 | orchestrator | 2026-02-28 00:52:06.984039 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-28 00:52:06.984049 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:03.386) 0:00:34.290 ***** 2026-02-28 00:52:06.984060 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:52:06.984071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:52:06.984082 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:52:06.984093 | orchestrator | 2026-02-28 00:52:06.984104 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-28 00:52:06.984115 | orchestrator | Saturday 28 February 2026 00:50:06 +0000 (0:00:01.642) 0:00:35.933 ***** 2026-02-28 00:52:06.984125 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:52:06.984136 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:52:06.984147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:52:06.984158 | orchestrator | 2026-02-28 00:52:06.984169 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:52:06.984180 | orchestrator | Saturday 28 
February 2026 00:50:08 +0000 (0:00:02.116) 0:00:38.050 ***** 2026-02-28 00:52:06.984191 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.984202 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:06.984213 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:06.984224 | orchestrator | 2026-02-28 00:52:06.984235 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-28 00:52:06.984246 | orchestrator | Saturday 28 February 2026 00:50:08 +0000 (0:00:00.396) 0:00:38.446 ***** 2026-02-28 00:52:06.984295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.984319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.984339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:06.984352 | orchestrator | 2026-02-28 00:52:06.984363 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2026-02-28 00:52:06.984374 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:02.144) 0:00:40.591 ***** 2026-02-28 00:52:06.984385 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:06.984396 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:06.984407 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:06.984418 | orchestrator | 2026-02-28 00:52:06.984429 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-28 00:52:06.984440 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:01.212) 0:00:41.803 ***** 2026-02-28 00:52:06.984451 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:06.984462 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:06.984473 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:06.984485 | orchestrator | 2026-02-28 00:52:06.984496 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-28 00:52:06.984507 | orchestrator | Saturday 28 February 2026 00:50:19 +0000 (0:00:07.736) 0:00:49.539 ***** 2026-02-28 00:52:06.984518 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:06.984529 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:06.984540 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:06.984551 | orchestrator | 2026-02-28 00:52:06.984562 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:52:06.984573 | orchestrator | 2026-02-28 00:52:06.984585 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:52:06.984596 | orchestrator | Saturday 28 February 2026 00:50:20 +0000 (0:00:00.356) 0:00:49.895 ***** 2026-02-28 00:52:06.984607 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:06.984617 | orchestrator | 2026-02-28 00:52:06.984640 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:52:06.984652 | orchestrator | Saturday 28 February 2026 00:50:20 +0000 (0:00:00.609) 0:00:50.505 ***** 2026-02-28 00:52:06.984663 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:06.984674 | orchestrator | 2026-02-28 00:52:06.984685 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:52:06.984696 | orchestrator | Saturday 28 February 2026 00:50:21 +0000 (0:00:00.299) 0:00:50.804 ***** 2026-02-28 00:52:06.984707 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:06.984718 | orchestrator | 2026-02-28 00:52:06.984729 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:52:06.984741 | orchestrator | Saturday 28 February 2026 00:50:27 +0000 (0:00:06.648) 0:00:57.453 ***** 2026-02-28 00:52:06.984758 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:06.984769 | orchestrator | 2026-02-28 00:52:06.984780 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:52:06.984791 | orchestrator | 2026-02-28 00:52:06.984802 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:52:06.984813 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:51.984) 0:01:49.437 ***** 2026-02-28 00:52:06.984824 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:06.984835 | orchestrator | 2026-02-28 00:52:06.984846 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:52:06.984857 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:00.711) 0:01:50.149 ***** 2026-02-28 00:52:06.984868 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:06.984879 | orchestrator | 2026-02-28 00:52:06.984890 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-02-28 00:52:06.984901 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:00.831) 0:01:50.980 ***** 2026-02-28 00:52:06.984912 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:06.984923 | orchestrator | 2026-02-28 00:52:06.984934 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:52:06.984945 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:03.033) 0:01:54.014 ***** 2026-02-28 00:52:06.984955 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:06.984966 | orchestrator | 2026-02-28 00:52:06.984977 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:52:06.984988 | orchestrator | 2026-02-28 00:52:06.984999 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:52:06.985016 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:16.842) 0:02:10.857 ***** 2026-02-28 00:52:06.985028 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:06.985039 | orchestrator | 2026-02-28 00:52:06.985050 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:52:06.985061 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:00.626) 0:02:11.483 ***** 2026-02-28 00:52:06.985072 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:06.985082 | orchestrator | 2026-02-28 00:52:06.985093 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:52:06.985104 | orchestrator | Saturday 28 February 2026 00:51:42 +0000 (0:00:00.250) 0:02:11.734 ***** 2026-02-28 00:52:06.985115 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:06.985126 | orchestrator | 2026-02-28 00:52:06.985137 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 
00:52:06.985148 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:06.856) 0:02:18.591 ***** 2026-02-28 00:52:06.985159 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:06.985170 | orchestrator | 2026-02-28 00:52:06.985181 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-28 00:52:06.985192 | orchestrator | 2026-02-28 00:52:06.985203 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-28 00:52:06.985214 | orchestrator | Saturday 28 February 2026 00:52:00 +0000 (0:00:11.829) 0:02:30.421 ***** 2026-02-28 00:52:06.985225 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:06.985236 | orchestrator | 2026-02-28 00:52:06.985247 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-28 00:52:06.985281 | orchestrator | Saturday 28 February 2026 00:52:01 +0000 (0:00:00.460) 0:02:30.881 ***** 2026-02-28 00:52:06.985297 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-28 00:52:06.985307 | orchestrator | enable_outward_rabbitmq_True 2026-02-28 00:52:06.985319 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-28 00:52:06.985329 | orchestrator | outward_rabbitmq_restart 2026-02-28 00:52:06.985340 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:06.985351 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:06.985370 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:06.985380 | orchestrator | 2026-02-28 00:52:06.985392 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-28 00:52:06.985402 | orchestrator | skipping: no hosts matched 2026-02-28 00:52:06.985413 | orchestrator | 2026-02-28 00:52:06.985424 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-28 
00:52:06.985435 | orchestrator | skipping: no hosts matched 2026-02-28 00:52:06.985446 | orchestrator | 2026-02-28 00:52:06.985456 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-28 00:52:06.985467 | orchestrator | skipping: no hosts matched 2026-02-28 00:52:06.985478 | orchestrator | 2026-02-28 00:52:06.985489 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:06.985500 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-28 00:52:06.985511 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-28 00:52:06.985522 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:52:06.985533 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:52:06.985544 | orchestrator | 2026-02-28 00:52:06.985555 | orchestrator | 2026-02-28 00:52:06.985565 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:52:06.985576 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:02.792) 0:02:33.674 ***** 2026-02-28 00:52:06.985587 | orchestrator | =============================================================================== 2026-02-28 00:52:06.985598 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.66s 2026-02-28 00:52:06.985608 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.54s 2026-02-28 00:52:06.985619 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.74s 2026-02-28 00:52:06.985630 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.90s 2026-02-28 00:52:06.985640 | orchestrator | 
rabbitmq : Ensuring config directories exist ---------------------------- 3.64s 2026-02-28 00:52:06.985651 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.40s 2026-02-28 00:52:06.985662 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.39s 2026-02-28 00:52:06.985672 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.79s 2026-02-28 00:52:06.985683 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.67s 2026-02-28 00:52:06.985694 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.61s 2026-02-28 00:52:06.985705 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.42s 2026-02-28 00:52:06.985780 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.14s 2026-02-28 00:52:06.985800 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.12s 2026-02-28 00:52:06.985811 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s 2026-02-28 00:52:06.985822 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.64s 2026-02-28 00:52:06.985841 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.38s 2026-02-28 00:52:06.985852 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.31s 2026-02-28 00:52:06.985863 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.26s 2026-02-28 00:52:06.985874 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.21s 2026-02-28 00:52:06.985891 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.20s 2026-02-28 00:52:06.985902 | orchestrator | 2026-02-28 
00:52:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:10.032849 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:10.033702 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:10.035517 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:10.035688 | orchestrator | 2026-02-28 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:13.081705 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:13.081806 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:13.082665 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:13.082693 | orchestrator | 2026-02-28 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:16.134181 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:16.139640 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:16.141046 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:16.141109 | orchestrator | 2026-02-28 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:19.197809 | orchestrator | 2026-02-28 00:52:19 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:19.197938 | orchestrator | 2026-02-28 00:52:19 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:19.198320 | orchestrator | 2026-02-28 00:52:19 | INFO  | Task 
adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:19.198351 | orchestrator | 2026-02-28 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:22.239052 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:22.239212 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:22.240821 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:22.240852 | orchestrator | 2026-02-28 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:25.291727 | orchestrator | 2026-02-28 00:52:25 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:25.293882 | orchestrator | 2026-02-28 00:52:25 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:25.297536 | orchestrator | 2026-02-28 00:52:25 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:25.297589 | orchestrator | 2026-02-28 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:28.347805 | orchestrator | 2026-02-28 00:52:28 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:28.347911 | orchestrator | 2026-02-28 00:52:28 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:28.348363 | orchestrator | 2026-02-28 00:52:28 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:28.348420 | orchestrator | 2026-02-28 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:31.392670 | orchestrator | 2026-02-28 00:52:31 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:31.393183 | orchestrator | 2026-02-28 00:52:31 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state 
STARTED 2026-02-28 00:52:31.394154 | orchestrator | 2026-02-28 00:52:31 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:31.394236 | orchestrator | 2026-02-28 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:34.429508 | orchestrator | 2026-02-28 00:52:34 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:34.432814 | orchestrator | 2026-02-28 00:52:34 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:34.436699 | orchestrator | 2026-02-28 00:52:34 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:34.436765 | orchestrator | 2026-02-28 00:52:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:37.507434 | orchestrator | 2026-02-28 00:52:37 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:37.507544 | orchestrator | 2026-02-28 00:52:37 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:37.507566 | orchestrator | 2026-02-28 00:52:37 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:37.507585 | orchestrator | 2026-02-28 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:40.537851 | orchestrator | 2026-02-28 00:52:40 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:40.540158 | orchestrator | 2026-02-28 00:52:40 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:40.542224 | orchestrator | 2026-02-28 00:52:40 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:40.542250 | orchestrator | 2026-02-28 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:43.589236 | orchestrator | 2026-02-28 00:52:43 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:43.589959 | orchestrator | 
2026-02-28 00:52:43 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:43.590950 | orchestrator | 2026-02-28 00:52:43 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:43.590995 | orchestrator | 2026-02-28 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:46.641177 | orchestrator | 2026-02-28 00:52:46 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:46.643025 | orchestrator | 2026-02-28 00:52:46 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:46.644555 | orchestrator | 2026-02-28 00:52:46 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:46.645121 | orchestrator | 2026-02-28 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:49.673962 | orchestrator | 2026-02-28 00:52:49 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:49.675413 | orchestrator | 2026-02-28 00:52:49 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:49.677410 | orchestrator | 2026-02-28 00:52:49 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:49.677492 | orchestrator | 2026-02-28 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:52.732208 | orchestrator | 2026-02-28 00:52:52 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:52.733076 | orchestrator | 2026-02-28 00:52:52 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:52.735762 | orchestrator | 2026-02-28 00:52:52 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:52.736181 | orchestrator | 2026-02-28 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:55.777252 | orchestrator | 2026-02-28 00:52:55 | INFO  | Task 
cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:55.781530 | orchestrator | 2026-02-28 00:52:55 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:55.782217 | orchestrator | 2026-02-28 00:52:55 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:55.782260 | orchestrator | 2026-02-28 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:58.821493 | orchestrator | 2026-02-28 00:52:58 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:52:58.823264 | orchestrator | 2026-02-28 00:52:58 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:52:58.824788 | orchestrator | 2026-02-28 00:52:58 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:52:58.825189 | orchestrator | 2026-02-28 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:01.867533 | orchestrator | 2026-02-28 00:53:01 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:53:01.868091 | orchestrator | 2026-02-28 00:53:01 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:53:01.871191 | orchestrator | 2026-02-28 00:53:01 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:53:01.871751 | orchestrator | 2026-02-28 00:53:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:04.931534 | orchestrator | 2026-02-28 00:53:04 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:53:04.933810 | orchestrator | 2026-02-28 00:53:04 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:53:04.938146 | orchestrator | 2026-02-28 00:53:04 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:53:04.938198 | orchestrator | 2026-02-28 00:53:04 | INFO  | Wait 1 second(s) until the next 
check 2026-02-28 00:53:07.987261 | orchestrator | 2026-02-28 00:53:07 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state STARTED 2026-02-28 00:53:07.988763 | orchestrator | 2026-02-28 00:53:07 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:53:07.990616 | orchestrator | 2026-02-28 00:53:07 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED 2026-02-28 00:53:07.990658 | orchestrator | 2026-02-28 00:53:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:11.047925 | orchestrator | 2026-02-28 00:53:11 | INFO  | Task cd5dbe01-c92d-4b6c-899d-1dcb351f07bc is in state SUCCESS 2026-02-28 00:53:11.049513 | orchestrator | 2026-02-28 00:53:11.049551 | orchestrator | 2026-02-28 00:53:11.049561 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:53:11.049573 | orchestrator | 2026-02-28 00:53:11.049592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:53:11.049610 | orchestrator | Saturday 28 February 2026 00:50:28 +0000 (0:00:00.184) 0:00:00.184 ***** 2026-02-28 00:53:11.049623 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:53:11.049660 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:53:11.049674 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:53:11.049689 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.049703 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.049716 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.049730 | orchestrator | 2026-02-28 00:53:11.049739 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:53:11.049747 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:00.743) 0:00:00.927 ***** 2026-02-28 00:53:11.049755 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-28 00:53:11.049763 | orchestrator | ok: [testbed-node-4] => 
(item=enable_ovn_True) 2026-02-28 00:53:11.049771 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-28 00:53:11.049779 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-28 00:53:11.049786 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-28 00:53:11.049794 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-28 00:53:11.049802 | orchestrator | 2026-02-28 00:53:11.049820 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-28 00:53:11.049829 | orchestrator | 2026-02-28 00:53:11.049837 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-28 00:53:11.049845 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:00.878) 0:00:01.806 ***** 2026-02-28 00:53:11.049854 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:53:11.049862 | orchestrator | 2026-02-28 00:53:11.049870 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-28 00:53:11.049878 | orchestrator | Saturday 28 February 2026 00:50:31 +0000 (0:00:01.348) 0:00:03.155 ***** 2026-02-28 00:53:11.049888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.049898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.049906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.049914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.049922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050052 | orchestrator | 2026-02-28 00:53:11.050062 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-28 00:53:11.050070 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:01.441) 0:00:04.596 ***** 2026-02-28 00:53:11.050078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050109 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.050138 | orchestrator | 2026-02-28 00:53:11.050148 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-28 00:53:11.050157 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:01.943) 0:00:06.540 ***** 2026-02-28 00:53:11.050166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050260 | orchestrator |
2026-02-28 00:53:11.050276 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-28 00:53:11.050289 | orchestrator | Saturday 28 February 2026 00:50:36 +0000 (0:00:01.417) 0:00:07.957 *****
2026-02-28 00:53:11.050302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050485 | orchestrator |
2026-02-28 00:53:11.050500 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-28 00:53:11.050514 | orchestrator | Saturday 28 February 2026 00:50:38 +0000 (0:00:02.048) 0:00:10.005 *****
2026-02-28 00:53:11.050529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.050600 | orchestrator |
2026-02-28 00:53:11.050608 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-28 00:53:11.050616 | orchestrator | Saturday 28 February 2026 00:50:39 +0000 (0:00:01.663) 0:00:11.669 *****
2026-02-28 00:53:11.050628 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:53:11.050643 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:53:11.050663 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:53:11.050678 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:53:11.050691 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:53:11.050704 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:53:11.050716 | orchestrator |
2026-02-28 00:53:11.050729 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-02-28 00:53:11.050742 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:02.943) 0:00:14.612 *****
2026-02-28 00:53:11.050755 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-28 00:53:11.050768 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-02-28 00:53:11.050781 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-28 00:53:11.050802 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.050817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-28 00:53:11.050830 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-28 00:53:11.050841 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-28 00:53:11.050854 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.050866 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.050879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.050893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.050907 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.050921 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.050943 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.050958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.050973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.050988 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.051004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.051030 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.051046 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.051059 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.051073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.051088 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.051102 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.051115 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.051129 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.051142 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.051156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.051171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.051185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.051199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.051213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.051284 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.051302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.051316 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-28 00:53:11.051331 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.051365 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-28 00:53:11.051379 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.051392 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-28 00:53:11.051406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-28 00:53:11.051430 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-28 00:53:11.051445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-28 00:53:11.051459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-28 00:53:11.051473 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-28 00:53:11.051487 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-28 00:53:11.051502 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-28 00:53:11.051517 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-28 00:53:11.051542 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-28 00:53:11.051563 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-28 00:53:11.051578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-28 00:53:11.051591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-28 00:53:11.051605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-28 00:53:11.051619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-28 00:53:11.051633 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-28 00:53:11.051647 | orchestrator |
2026-02-28 00:53:11.051662 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.051676 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:25.358) 0:00:39.970 *****
2026-02-28 00:53:11.051690 | orchestrator |
2026-02-28 00:53:11.051704 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.051717 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.081) 0:00:40.052 *****
2026-02-28 00:53:11.051730 | orchestrator |
2026-02-28 00:53:11.051743 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.051756 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.075) 0:00:40.127 *****
2026-02-28 00:53:11.051769 | orchestrator |
2026-02-28 00:53:11.051783 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.051797 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.070) 0:00:40.198 *****
2026-02-28 00:53:11.051811 | orchestrator |
2026-02-28 00:53:11.051825 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.051840 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.077) 0:00:40.275 *****
2026-02-28 00:53:11.051855 | orchestrator |
2026-02-28 00:53:11.051869 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.051883 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.072) 0:00:40.348 *****
2026-02-28 00:53:11.051897 | orchestrator |
2026-02-28 00:53:11.051911 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-02-28 00:53:11.051925 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.177) 0:00:40.525 *****
2026-02-28 00:53:11.051939 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.051952 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:53:11.051966 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:53:11.051980 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.051994 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:53:11.052007 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.052021 | orchestrator |
2026-02-28 00:53:11.052035 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-28 00:53:11.052049 | orchestrator | Saturday 28 February 2026 00:51:11 +0000 (0:00:03.102) 0:00:43.628 *****
2026-02-28 00:53:11.052064 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:53:11.052078 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:53:11.052092 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:53:11.052106 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:53:11.052120 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:53:11.052133 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:53:11.052147 | orchestrator |
2026-02-28 00:53:11.052160 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-28 00:53:11.052175 | orchestrator |
2026-02-28 00:53:11.052201 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:53:11.052217 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:31.343) 0:01:14.972 *****
2026-02-28 00:53:11.052231 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:11.052246 | orchestrator |
2026-02-28 00:53:11.052261 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:53:11.052276 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:00.770) 0:01:15.743 *****
2026-02-28 00:53:11.052291 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:11.052305 | orchestrator |
2026-02-28 00:53:11.052329 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-28 00:53:11.052365 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:00.572) 0:01:16.315 *****
2026-02-28 00:53:11.052381 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.052395 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.052410 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.052424 | orchestrator |
2026-02-28 00:53:11.052438 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-28 00:53:11.052452 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:01.087) 0:01:17.403 *****
2026-02-28 00:53:11.052466 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.052480 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.052494 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.052509 | orchestrator |
2026-02-28 00:53:11.052522 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-28 00:53:11.052536 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:00.379) 0:01:17.782 *****
2026-02-28 00:53:11.052550 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.052563 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.052576 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.052591 | orchestrator |
2026-02-28 00:53:11.052605 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-28 00:53:11.052618 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:00.398) 0:01:18.181 *****
2026-02-28 00:53:11.052632 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.052646 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.052676 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.052690 | orchestrator |
2026-02-28 00:53:11.052703 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-28 00:53:11.052718 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:00.424) 0:01:18.605 *****
2026-02-28 00:53:11.052731 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.052744 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.052757 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.052770 | orchestrator |
2026-02-28 00:53:11.052784 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-28 00:53:11.052842 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:00.680) 0:01:19.286 *****
2026-02-28 00:53:11.052858 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.052873 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.052887 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.052934 | orchestrator |
2026-02-28 00:53:11.052951 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-28 00:53:11.052965 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:00.379) 0:01:19.665 *****
2026-02-28 00:53:11.052979 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.052993 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053006 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053020 | orchestrator |
2026-02-28 00:53:11.053033 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-28 00:53:11.053047 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.416) 0:01:20.081 *****
2026-02-28 00:53:11.053060 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053086 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053101 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053115 | orchestrator |
2026-02-28 00:53:11.053129 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-28 00:53:11.053144 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.507) 0:01:20.589 *****
2026-02-28 00:53:11.053158 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053172 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053186 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053200 | orchestrator |
2026-02-28 00:53:11.053215 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-28 00:53:11.053231 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:00.788) 0:01:21.377 *****
2026-02-28 00:53:11.053247 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053262 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053278 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053293 | orchestrator |
2026-02-28 00:53:11.053309 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-28 00:53:11.053326 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:00.326) 0:01:21.703 *****
2026-02-28 00:53:11.053402 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053423 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053436 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053450 | orchestrator |
2026-02-28 00:53:11.053466 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-28 00:53:11.053481 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.311) 0:01:22.015 *****
2026-02-28 00:53:11.053494 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053508 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053522 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053536 | orchestrator |
2026-02-28 00:53:11.053549 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-28 00:53:11.053563 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.346) 0:01:22.361 *****
2026-02-28 00:53:11.053577 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053591 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053606 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053620 | orchestrator |
2026-02-28 00:53:11.053634 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-28 00:53:11.053649 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.498) 0:01:22.860 *****
2026-02-28 00:53:11.053663 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053678 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053692 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053706 | orchestrator |
2026-02-28 00:53:11.053719 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-28 00:53:11.053733 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.301) 0:01:23.161 *****
2026-02-28 00:53:11.053746 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053759 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053773 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053785 | orchestrator |
2026-02-28 00:53:11.053813 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-28 00:53:11.053827 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.318) 0:01:23.480 *****
2026-02-28 00:53:11.053842 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053856 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053870 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053884 | orchestrator |
2026-02-28 00:53:11.053897 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-28 00:53:11.053911 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:00.312) 0:01:23.792 *****
2026-02-28 00:53:11.053925 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.053953 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.053967 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.053981 | orchestrator |
2026-02-28 00:53:11.053995 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:53:11.054009 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:00.359) 0:01:24.152 *****
2026-02-28 00:53:11.054061 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:11.054075 | orchestrator |
2026-02-28 00:53:11.054089 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-28 00:53:11.054105 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.901) 0:01:25.054 *****
2026-02-28 00:53:11.054121 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.054145 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.054159 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.054174 | orchestrator |
2026-02-28 00:53:11.054190 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-28 00:53:11.054205 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.526) 0:01:25.580 *****
2026-02-28 00:53:11.054221 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.054236 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.054251 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.054266 | orchestrator |
2026-02-28 00:53:11.054282 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-28 00:53:11.054297 | orchestrator | Saturday 28 February 2026 00:51:54 +0000 (0:00:00.650) 0:01:26.231 *****
2026-02-28 00:53:11.054312 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.054328 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.054361 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.054376 | orchestrator |
2026-02-28 00:53:11.054389 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-28 00:53:11.054403 | orchestrator | Saturday 28 February 2026 00:51:55 +0000 (0:00:00.685) 0:01:26.916 *****
2026-02-28 00:53:11.054417 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.054429 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.054442 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.054455 | orchestrator |
2026-02-28 00:53:11.054468 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-28 00:53:11.054482 | orchestrator | Saturday 28 February 2026 00:51:55 +0000 (0:00:00.479) 0:01:27.395 *****
2026-02-28 00:53:11.054496 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.054509 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.054523 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.054537 | orchestrator |
2026-02-28 00:53:11.054551 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-28 00:53:11.054565 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:00.503) 0:01:27.898 *****
2026-02-28 00:53:11.054579 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.054593 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.054654 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.054672 | orchestrator |
2026-02-28 00:53:11.054686 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-28 00:53:11.054701 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:00.487) 0:01:28.386 *****
2026-02-28 00:53:11.054715 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.054730 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.054743 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.054757 | orchestrator |
2026-02-28 00:53:11.054770 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-28 00:53:11.054783 | orchestrator | Saturday 28 February 2026 00:51:57 +0000 (0:00:00.813) 0:01:29.199 *****
2026-02-28 00:53:11.054797 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.054810 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.054824 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.054850 | orchestrator |
2026-02-28 00:53:11.054865 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-28 00:53:11.054879 | orchestrator | Saturday 28 February 2026 00:51:57 +0000 (0:00:00.344) 0:01:29.543 *****
2026-02-28 00:53:11.054895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.054912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.054941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.054957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.054981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.054996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055062 | orchestrator |
2026-02-28 00:53:11.055076 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-28 00:53:11.055091 | orchestrator | Saturday 28 February 2026 00:51:59 +0000 (0:00:01.479) 0:01:31.023 *****
2026-02-28 00:53:11.055106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.055197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055265 | orchestrator | 2026-02-28 00:53:11.055281 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-28 00:53:11.055295 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:04.190) 0:01:35.213 ***** 2026-02-28 00:53:11.055310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055425 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.055498 | orchestrator | 2026-02-28 00:53:11.055513 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.055526 | 
orchestrator | Saturday 28 February 2026 00:52:05 +0000 (0:00:02.399) 0:01:37.613 ***** 2026-02-28 00:53:11.055540 | orchestrator | 2026-02-28 00:53:11.055554 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.055568 | orchestrator | Saturday 28 February 2026 00:52:05 +0000 (0:00:00.064) 0:01:37.677 ***** 2026-02-28 00:53:11.055582 | orchestrator | 2026-02-28 00:53:11.055596 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.055610 | orchestrator | Saturday 28 February 2026 00:52:05 +0000 (0:00:00.075) 0:01:37.753 ***** 2026-02-28 00:53:11.055623 | orchestrator | 2026-02-28 00:53:11.055637 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-28 00:53:11.055650 | orchestrator | Saturday 28 February 2026 00:52:06 +0000 (0:00:00.070) 0:01:37.823 ***** 2026-02-28 00:53:11.055664 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.055678 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.055692 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.055705 | orchestrator | 2026-02-28 00:53:11.055719 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-28 00:53:11.055732 | orchestrator | Saturday 28 February 2026 00:52:12 +0000 (0:00:06.885) 0:01:44.709 ***** 2026-02-28 00:53:11.055745 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.055758 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.055772 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.055785 | orchestrator | 2026-02-28 00:53:11.055798 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-28 00:53:11.055812 | orchestrator | Saturday 28 February 2026 00:52:20 +0000 (0:00:07.658) 0:01:52.367 ***** 2026-02-28 00:53:11.055825 | orchestrator | changed: 
[testbed-node-0] 2026-02-28 00:53:11.055839 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.055854 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.055868 | orchestrator | 2026-02-28 00:53:11.055881 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-28 00:53:11.055894 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:07.693) 0:02:00.061 ***** 2026-02-28 00:53:11.055906 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:11.055920 | orchestrator | 2026-02-28 00:53:11.055933 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-28 00:53:11.055946 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:00.180) 0:02:00.241 ***** 2026-02-28 00:53:11.055959 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.055973 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.055986 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.055999 | orchestrator | 2026-02-28 00:53:11.056021 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-28 00:53:11.056035 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:01.128) 0:02:01.370 ***** 2026-02-28 00:53:11.056048 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:11.056062 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:11.056076 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.056089 | orchestrator | 2026-02-28 00:53:11.056103 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-28 00:53:11.056117 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:00.837) 0:02:02.207 ***** 2026-02-28 00:53:11.056131 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.056144 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.056158 | orchestrator | ok: [testbed-node-2] 2026-02-28 
00:53:11.056171 | orchestrator | 2026-02-28 00:53:11.056193 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-28 00:53:11.056206 | orchestrator | Saturday 28 February 2026 00:52:31 +0000 (0:00:01.101) 0:02:03.309 ***** 2026-02-28 00:53:11.056220 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:11.056233 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:11.056247 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.056260 | orchestrator | 2026-02-28 00:53:11.056273 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-28 00:53:11.056288 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.913) 0:02:04.222 ***** 2026-02-28 00:53:11.056301 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.056315 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.056329 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.056393 | orchestrator | 2026-02-28 00:53:11.056418 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-28 00:53:11.056432 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:01.055) 0:02:05.277 ***** 2026-02-28 00:53:11.056445 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.056460 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.056474 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.056488 | orchestrator | 2026-02-28 00:53:11.056502 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-28 00:53:11.056517 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.966) 0:02:06.244 ***** 2026-02-28 00:53:11.056531 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.056545 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.056560 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.056574 | orchestrator | 2026-02-28 
00:53:11.056588 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-28 00:53:11.056601 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.427) 0:02:06.671 ***** 2026-02-28 00:53:11.056616 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056646 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056661 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056677 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056728 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056742 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056756 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056770 | orchestrator | 2026-02-28 00:53:11.056783 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-28 00:53:11.056798 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:01.751) 0:02:08.423 ***** 2026-02-28 00:53:11.056812 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056826 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056937 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056956 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.056994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057014 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-28 00:53:11.057039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057051 | orchestrator | 2026-02-28 00:53:11.057067 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-28 00:53:11.057079 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:04.590) 0:02:13.013 ***** 2026-02-28 00:53:11.057091 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057116 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
00:53:11.057128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057159 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.057217 | orchestrator | 2026-02-28 00:53:11.057229 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.057241 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:03.185) 0:02:16.198 ***** 2026-02-28 00:53:11.057253 | orchestrator | 2026-02-28 00:53:11.057264 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.057276 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.068) 0:02:16.267 ***** 2026-02-28 00:53:11.057292 | orchestrator | 2026-02-28 00:53:11.057304 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.057316 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.061) 0:02:16.329 ***** 2026-02-28 00:53:11.057328 | orchestrator | 2026-02-28 00:53:11.057339 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-28 00:53:11.057369 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.068) 0:02:16.397 ***** 2026-02-28 00:53:11.057381 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.057394 | orchestrator | changed: 
[testbed-node-2] 2026-02-28 00:53:11.057407 | orchestrator | 2026-02-28 00:53:11.057419 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-28 00:53:11.057431 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:06.149) 0:02:22.547 ***** 2026-02-28 00:53:11.057443 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.057455 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.057467 | orchestrator | 2026-02-28 00:53:11.057479 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-28 00:53:11.057491 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:06.701) 0:02:29.248 ***** 2026-02-28 00:53:11.057503 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.057515 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.057527 | orchestrator | 2026-02-28 00:53:11.057539 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-28 00:53:11.057551 | orchestrator | Saturday 28 February 2026 00:53:04 +0000 (0:00:06.655) 0:02:35.904 ***** 2026-02-28 00:53:11.057563 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:11.057590 | orchestrator | 2026-02-28 00:53:11.057603 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-28 00:53:11.057615 | orchestrator | Saturday 28 February 2026 00:53:04 +0000 (0:00:00.151) 0:02:36.056 ***** 2026-02-28 00:53:11.057627 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.057639 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.057651 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.057664 | orchestrator | 2026-02-28 00:53:11.057676 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-28 00:53:11.057689 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:00.794) 0:02:36.850 ***** 
2026-02-28 00:53:11.057701 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.057713 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.057724 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:53:11.057735 | orchestrator |
2026-02-28 00:53:11.057746 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-28 00:53:11.057757 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:00.672) 0:02:37.523 *****
2026-02-28 00:53:11.057768 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.057780 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.057792 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.057804 | orchestrator |
2026-02-28 00:53:11.057816 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-28 00:53:11.057828 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:00.854) 0:02:38.378 *****
2026-02-28 00:53:11.057840 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.057852 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.057864 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:53:11.057876 | orchestrator |
2026-02-28 00:53:11.057889 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-28 00:53:11.057901 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:00.819) 0:02:39.197 *****
2026-02-28 00:53:11.057913 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.057925 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.057937 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.057948 | orchestrator |
2026-02-28 00:53:11.057961 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-28 00:53:11.057973 | orchestrator | Saturday 28 February 2026 00:53:08 +0000 (0:00:00.941) 0:02:40.139 *****
2026-02-28 00:53:11.057985 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.057997 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.058009 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.058110 | orchestrator |
2026-02-28 00:53:11.058123 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:53:11.058136 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-28 00:53:11.058148 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-28 00:53:11.058169 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-28 00:53:11.058182 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:53:11.058196 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:53:11.058209 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:53:11.058222 | orchestrator |
2026-02-28 00:53:11.058235 | orchestrator |
2026-02-28 00:53:11.058249 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:53:11.058273 | orchestrator | Saturday 28 February 2026 00:53:09 +0000 (0:00:01.064) 0:02:41.204 *****
2026-02-28 00:53:11.058285 | orchestrator | ===============================================================================
2026-02-28 00:53:11.058298 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.34s
2026-02-28 00:53:11.058309 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 25.36s
2026-02-28 00:53:11.058326 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.36s
2026-02-28 00:53:11.058338 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.35s
2026-02-28 00:53:11.058406 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.04s
2026-02-28 00:53:11.058418 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.59s
2026-02-28 00:53:11.058429 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.19s
2026-02-28 00:53:11.058441 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.19s
2026-02-28 00:53:11.058453 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 3.10s
2026-02-28 00:53:11.058464 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.94s
2026-02-28 00:53:11.058475 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.40s
2026-02-28 00:53:11.058487 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.05s
2026-02-28 00:53:11.058499 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.94s
2026-02-28 00:53:11.058510 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.75s
2026-02-28 00:53:11.058521 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.66s
2026-02-28 00:53:11.058533 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s
2026-02-28 00:53:11.058544 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.44s
2026-02-28 00:53:11.058555 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.42s
2026-02-28 00:53:11.058566 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.35s
2026-02-28 00:53:11.058578 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.13s
2026-02-28 00:53:11.058597 | orchestrator | 2026-02-28 00:53:11 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:53:11.058610 | orchestrator | 2026-02-28 00:53:11 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:53:11.058621 | orchestrator | 2026-02-28 00:53:11 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:53:14.098674 | orchestrator | 2026-02-28 00:53:14 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:53:14.101279 | orchestrator | 2026-02-28 00:53:14 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:53:14.101331 | orchestrator | 2026-02-28 00:53:14 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:53:17.147649 | orchestrator | 2026-02-28 00:53:17 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:53:17.148709 | orchestrator | 2026-02-28 00:53:17 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:53:17.148877 | orchestrator | 2026-02-28 00:53:17 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:53:20.199210 | orchestrator | 2026-02-28 00:53:20 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:53:20.200690 | orchestrator | 2026-02-28 00:53:20 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:53:20.201024 | orchestrator | 2026-02-28 00:53:20 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:53:23.244756 | orchestrator | 2026-02-28 00:53:23 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:53:23.248248 | orchestrator | 2026-02-28 00:53:23 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:53:23.248312 | orchestrator | 2026-02-28 00:53:23 | INFO  | Wait 1 second(s) until the next check
00:56:20 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:56:20.068199 | orchestrator | 2026-02-28 00:56:20 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:56:20.070790 | orchestrator | 2026-02-28 00:56:20 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:56:23.115503 | orchestrator | 2026-02-28 00:56:23 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:56:23.118087 | orchestrator | 2026-02-28 00:56:23 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state STARTED
2026-02-28 00:56:23.118146 | orchestrator | 2026-02-28 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:56:26.160042 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED
2026-02-28 00:56:26.160206 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED
2026-02-28 00:56:26.168664 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task adae56f0-6538-4972-8e82-fca43d69a754 is in state SUCCESS
2026-02-28 00:56:26.172233 | orchestrator |
2026-02-28 00:56:26.172304 | orchestrator |
2026-02-28 00:56:26.172311 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:56:26.172318 | orchestrator |
2026-02-28 00:56:26.172323 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:56:26.172329 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:00.393) 0:00:00.393 *****
2026-02-28 00:56:26.172335 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.172341 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.172346 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.172351 | orchestrator |
2026-02-28 00:56:26.172356 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:56:26.172361 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:00.558) 0:00:00.951 *****
2026-02-28 00:56:26.172366 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-28 00:56:26.172371 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-28 00:56:26.172376 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-28 00:56:26.172381 | orchestrator |
2026-02-28 00:56:26.172386 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-28 00:56:26.172391 | orchestrator |
2026-02-28 00:56:26.172396 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-28 00:56:26.172401 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:00.886) 0:00:01.838 *****
2026-02-28 00:56:26.172406 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.172411 | orchestrator |
2026-02-28 00:56:26.172416 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-28 00:56:26.172421 | orchestrator | Saturday 28 February 2026 00:49:08 +0000 (0:00:01.164) 0:00:03.003 *****
2026-02-28 00:56:26.172426 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.172444 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.172450 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.172454 | orchestrator |
2026-02-28 00:56:26.172459 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-28 00:56:26.172464 | orchestrator | Saturday 28 February 2026 00:49:10 +0000 (0:00:01.897) 0:00:04.900 *****
2026-02-28 00:56:26.172469 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.172474 | orchestrator |
2026-02-28 00:56:26.172479 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-28 00:56:26.172484 | orchestrator | Saturday 28 February 2026 00:49:12 +0000 (0:00:01.260) 0:00:06.161 *****
2026-02-28 00:56:26.172489 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.172494 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.172499 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.172520 | orchestrator |
2026-02-28 00:56:26.172525 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-28 00:56:26.172530 | orchestrator | Saturday 28 February 2026 00:49:14 +0000 (0:00:02.292) 0:00:08.453 *****
2026-02-28 00:56:26.172535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:56:26.172592 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:56:26.172601 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:56:26.172609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:56:26.172616 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:56:26.172622 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-28 00:56:26.172630 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-28 00:56:26.172637 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-28 00:56:26.172644 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-28 00:56:26.172651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-28 00:56:26.172657 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-28 00:56:26.172664 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-28 00:56:26.172672 | orchestrator |
2026-02-28 00:56:26.172679 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-28 00:56:26.172685 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:04.520) 0:00:12.973 *****
2026-02-28 00:56:26.172690 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-28 00:56:26.172723 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-28 00:56:26.172730 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-28 00:56:26.172734 | orchestrator |
2026-02-28 00:56:26.172739 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-28 00:56:26.172744 | orchestrator | Saturday 28 February 2026 00:49:20 +0000 (0:00:01.795) 0:00:14.769 *****
2026-02-28 00:56:26.172748 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-28 00:56:26.172753 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-28 00:56:26.172758 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-28 00:56:26.172762 | orchestrator |
2026-02-28 00:56:26.172767 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-28 00:56:26.172771 | orchestrator | Saturday 28 February 2026 00:49:24 +0000 (0:00:03.606) 0:00:18.375 *****
2026-02-28 00:56:26.172776 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-28 00:56:26.172781 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.172798 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-28 00:56:26.172803 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.172808 | orchestrator | skipping: [testbed-node-2] =>
(item=ip_vs)  2026-02-28 00:56:26.172812 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.172817 | orchestrator | 2026-02-28 00:56:26.172821 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-28 00:56:26.172826 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:01.206) 0:00:19.581 ***** 2026-02-28 00:56:26.172833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.172856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.172862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.172867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.172872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.172881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.172886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.172895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.172903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.172908 | orchestrator | 2026-02-28 00:56:26.172913 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-28 00:56:26.172917 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:02.520) 0:00:22.102 ***** 2026-02-28 00:56:26.172922 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.172927 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.172931 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.172936 | orchestrator | 2026-02-28 00:56:26.172941 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-28 00:56:26.172945 | orchestrator | Saturday 28 February 2026 00:49:29 +0000 (0:00:01.985) 0:00:24.087 ***** 2026-02-28 00:56:26.172950 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-28 00:56:26.172954 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-28 00:56:26.172978 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-28 00:56:26.172983 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-28 00:56:26.172987 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-28 00:56:26.172992 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-28 00:56:26.172996 | orchestrator | 2026-02-28 00:56:26.173001 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-28 00:56:26.173006 | orchestrator | Saturday 28 February 2026 00:49:32 +0000 (0:00:02.784) 0:00:26.872 ***** 2026-02-28 00:56:26.173010 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.173015 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.173019 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.173024 | 
orchestrator | 2026-02-28 00:56:26.173029 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-28 00:56:26.173033 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:01.620) 0:00:28.493 ***** 2026-02-28 00:56:26.173038 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:26.173042 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:26.173047 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:26.173052 | orchestrator | 2026-02-28 00:56:26.173056 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-28 00:56:26.173061 | orchestrator | Saturday 28 February 2026 00:49:37 +0000 (0:00:03.233) 0:00:31.726 ***** 2026-02-28 00:56:26.173066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.173079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.173084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.173093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:26.173099 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.173104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.173109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.173114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.173125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.173130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.173196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:26.173202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.173207 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.173212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:26.173217 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.173221 | orchestrator | 2026-02-28 00:56:26.173226 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-28 00:56:26.173231 | orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:01.062) 0:00:32.789 ***** 2026-02-28 00:56:26.173236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173248 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2026-02-28 00:56:26.173266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.173271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:26.173276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-02-28 00:56:26.173284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.173293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:26.173298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 
00:56:26.173306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.173311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c', '__omit_place_holder__547b277dd5273390dff060ecfa84d6426c7f322c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:26.173316 | orchestrator | 2026-02-28 00:56:26.173320 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-28 00:56:26.173325 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:03.559) 0:00:36.348 ***** 2026-02-28 00:56:26.173330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.173384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.173389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.173394 | orchestrator | 2026-02-28 00:56:26.173399 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-28 00:56:26.173403 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:04.179) 0:00:40.528 ***** 2026-02-28 00:56:26.173408 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:56:26.173631 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:56:26.173643 | orchestrator 
| changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:56:26.173648 | orchestrator | 2026-02-28 00:56:26.173653 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-28 00:56:26.173657 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:04.953) 0:00:45.481 ***** 2026-02-28 00:56:26.173662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:56:26.173667 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:56:26.173722 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:56:26.173728 | orchestrator | 2026-02-28 00:56:26.173733 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-28 00:56:26.173737 | orchestrator | Saturday 28 February 2026 00:49:55 +0000 (0:00:04.575) 0:00:50.057 ***** 2026-02-28 00:56:26.173742 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.173747 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.173751 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.173756 | orchestrator | 2026-02-28 00:56:26.173760 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-28 00:56:26.173765 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:00.955) 0:00:51.012 ***** 2026-02-28 00:56:26.173770 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:56:26.173776 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:56:26.173780 | orchestrator | changed: 
[testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:56:26.173785 | orchestrator | 2026-02-28 00:56:26.173790 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-28 00:56:26.173794 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:05.244) 0:00:56.257 ***** 2026-02-28 00:56:26.173820 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:56:26.173825 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:56:26.173830 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:56:26.173834 | orchestrator | 2026-02-28 00:56:26.173839 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-28 00:56:26.173844 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:03.506) 0:00:59.763 ***** 2026-02-28 00:56:26.173848 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-28 00:56:26.173853 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-28 00:56:26.173857 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-28 00:56:26.173862 | orchestrator | 2026-02-28 00:56:26.173867 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-28 00:56:26.173894 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:01.892) 0:01:01.656 ***** 2026-02-28 00:56:26.173899 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-28 00:56:26.173904 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-28 00:56:26.173909 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 
2026-02-28 00:56:26.173913 | orchestrator | 2026-02-28 00:56:26.173918 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-28 00:56:26.173922 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:01.661) 0:01:03.317 ***** 2026-02-28 00:56:26.173927 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.173931 | orchestrator | 2026-02-28 00:56:26.173936 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-28 00:56:26.173941 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:01.355) 0:01:04.673 ***** 2026-02-28 00:56:26.173945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.173990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.173995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.174003 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.174008 | orchestrator | 2026-02-28 00:56:26.174072 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-28 00:56:26.174086 | orchestrator | Saturday 28 February 2026 00:50:14 +0000 (0:00:04.016) 0:01:08.690 ***** 2026-02-28 00:56:26.174093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174130 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.174138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174166 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.174172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174206 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.174214 | orchestrator | 2026-02-28 00:56:26.174221 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-28 00:56:26.174229 | orchestrator | Saturday 28 February 2026 00:50:16 +0000 (0:00:01.920) 0:01:10.610 ***** 2026-02-28 00:56:26.174236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174265 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.174273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174313 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.174320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174341 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.174348 | orchestrator | 2026-02-28 00:56:26.174356 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-28 00:56:26.174363 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:00.903) 0:01:11.514 ***** 2026-02-28 00:56:26.174380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174410 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.174419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174444 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.174458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174516 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.174524 | orchestrator | 2026-02-28 00:56:26.174533 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-28 00:56:26.174539 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:00.980) 0:01:12.494 ***** 2026-02-28 00:56:26.174566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174589 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.174594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174619 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.174632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174654 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.174661 | orchestrator | 2026-02-28 00:56:26.174674 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-28 00:56:26.174681 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:00.639) 0:01:13.133 
***** 2026-02-28 00:56:26.174689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174882 | orchestrator | skipping: 
[testbed-node-0] 2026-02-28 00:56:26.174899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174936 | orchestrator | skipping: 
[testbed-node-1] 2026-02-28 00:56:26.174942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.174960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.174967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.174974 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:56:26.174981 | orchestrator | 2026-02-28 00:56:26.174988 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-28 00:56:26.174995 | orchestrator | Saturday 28 February 2026 00:50:19 +0000 (0:00:00.887) 0:01:14.021 ***** 2026-02-28 00:56:26.175010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.175025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175037 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.175041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.175055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175060 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.175064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.175075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175082 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.175086 | orchestrator | 2026-02-28 00:56:26.175091 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-28 00:56:26.175095 | orchestrator | Saturday 28 February 2026 00:50:20 +0000 (0:00:00.810) 0:01:14.831 ***** 2026-02-28 00:56:26.175099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-02-28 00:56:26.175112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175116 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.175121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-02-28 00:56:26.175131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175139 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.175143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-02-28 00:56:26.175151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175155 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.175159 | orchestrator | 2026-02-28 00:56:26.175163 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-28 00:56:26.175170 | orchestrator | Saturday 28 February 2026 00:50:21 +0000 (0:00:00.691) 0:01:15.523 ***** 2026-02-28 00:56:26.175174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.175185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175192 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.175196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.175205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175212 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.175221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:26.175226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:26.175232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:26.175239 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.175243 | orchestrator | 2026-02-28 00:56:26.175247 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-28 00:56:26.175251 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.765) 0:01:16.289 ***** 2026-02-28 00:56:26.175255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-28 00:56:26.175260 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-28 00:56:26.175264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-28 00:56:26.175268 | orchestrator | 2026-02-28 00:56:26.175272 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-28 00:56:26.175276 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:01.625) 0:01:17.915 ***** 2026-02-28 00:56:26.175280 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-28 00:56:26.175284 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-28 00:56:26.175288 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-28 00:56:26.175292 | orchestrator | 2026-02-28 00:56:26.175296 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-28 00:56:26.175300 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:01.873) 0:01:19.788 ***** 2026-02-28 00:56:26.175304 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 00:56:26.175308 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 00:56:26.175312 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.175317 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 00:56:26.175321 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 00:56:26.175325 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 00:56:26.175329 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.175333 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 00:56:26.175337 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.175341 | orchestrator | 2026-02-28 00:56:26.175345 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-28 00:56:26.175350 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:01.096) 
0:01:20.884 ***** 2026-02-28 00:56:26.175359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.175365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.175375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:26.175380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.175385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.175390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:26.175395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.175404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.175412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:26.175416 | orchestrator | 2026-02-28 00:56:26.175421 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-28 00:56:26.175425 | 
orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:02.897) 0:01:23.781 ***** 2026-02-28 00:56:26.175430 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.175434 | orchestrator | 2026-02-28 00:56:26.175439 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-28 00:56:26.175445 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:00.583) 0:01:24.365 ***** 2026-02-28 00:56:26.175452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-28 00:56:26.175459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-02-28 00:56:26.175464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.175469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-28 00:56:26.178250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:26.178262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-28 00:56:26.178284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:26.178308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178324 | orchestrator | 2026-02-28 00:56:26.178332 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-28 00:56:26.178341 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:04.569) 0:01:28.934 ***** 2026-02-28 00:56:26.178354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}}}})  2026-02-28 00:56:26.178362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:26.178369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178388 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
00:56:26.178403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-28 00:56:26.178415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-28 00:56:26.178422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:26.178429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:26.178441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178466 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.178474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178480 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.178488 | orchestrator | 2026-02-28 00:56:26.178495 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-28 00:56:26.178503 | orchestrator | Saturday 28 February 2026 00:50:36 +0000 (0:00:01.240) 
0:01:30.175 ***** 2026-02-28 00:56:26.178511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:26.178519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:26.178527 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.178534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:26.178540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:26.178568 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.178575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:26.178581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:26.178587 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.178593 | orchestrator | 2026-02-28 00:56:26.178600 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-28 00:56:26.178631 | orchestrator | Saturday 28 February 2026 00:50:37 +0000 (0:00:01.518) 0:01:31.693 ***** 2026-02-28 00:56:26.178639 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.178645 | 
orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.178652 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.178664 | orchestrator | 2026-02-28 00:56:26.178671 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-28 00:56:26.178678 | orchestrator | Saturday 28 February 2026 00:50:38 +0000 (0:00:01.431) 0:01:33.124 ***** 2026-02-28 00:56:26.178685 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.178691 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.178696 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.178702 | orchestrator | 2026-02-28 00:56:26.178707 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-28 00:56:26.178713 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:02.243) 0:01:35.368 ***** 2026-02-28 00:56:26.178719 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.178725 | orchestrator | 2026-02-28 00:56:26.178730 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-28 00:56:26.178738 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:00.937) 0:01:36.305 ***** 2026-02-28 00:56:26.178752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.178759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.178791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178808 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.178819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178833 | orchestrator | 2026-02-28 00:56:26.178839 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-28 00:56:26.178844 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:07.050) 0:01:43.355 ***** 2026-02-28 00:56:26.178855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.178861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178878 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.178884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 
00:56:26.178894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.178916 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.178926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.178939 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.178946 | orchestrator | 2026-02-28 00:56:26.178951 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-28 00:56:26.178958 | 
orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:00.669) 0:01:44.025 ***** 2026-02-28 00:56:26.178965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-28 00:56:26.178976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-28 00:56:26.178983 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.178990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179011 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179030 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179036 | orchestrator | 2026-02-28 00:56:26.179042 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 
2026-02-28 00:56:26.179048 | orchestrator | Saturday 28 February 2026 00:50:51 +0000 (0:00:01.417) 0:01:45.442 ***** 2026-02-28 00:56:26.179054 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.179060 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.179065 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.179072 | orchestrator | 2026-02-28 00:56:26.179078 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-28 00:56:26.179083 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:01.546) 0:01:46.989 ***** 2026-02-28 00:56:26.179089 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.179095 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.179101 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.179107 | orchestrator | 2026-02-28 00:56:26.179113 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-28 00:56:26.179120 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 (0:00:02.355) 0:01:49.345 ***** 2026-02-28 00:56:26.179126 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.179131 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179137 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179143 | orchestrator | 2026-02-28 00:56:26.179150 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-28 00:56:26.179155 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 (0:00:00.348) 0:01:49.694 ***** 2026-02-28 00:56:26.179161 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.179167 | orchestrator | 2026-02-28 00:56:26.179172 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-28 00:56:26.179177 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.959) 
0:01:50.653 ***** 2026-02-28 00:56:26.179190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-28 00:56:26.179197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-28 00:56:26.179212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-28 00:56:26.179218 | orchestrator | 2026-02-28 00:56:26.179224 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-28 00:56:26.179230 | orchestrator | Saturday 28 February 2026 00:51:00 +0000 (0:00:03.752) 0:01:54.406 ***** 2026-02-28 00:56:26.179236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-28 00:56:26.179244 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.179251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-28 00:56:26.179257 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-28 00:56:26.179278 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179284 | orchestrator | 2026-02-28 00:56:26.179290 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-28 00:56:26.179296 | orchestrator | Saturday 28 February 2026 
00:51:02 +0000 (0:00:02.446) 0:01:56.852 ***** 2026-02-28 00:56:26.179302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:56:26.179313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:56:26.179322 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.179328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:56:26.179334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:56:26.179340 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:56:26.179346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:56:26.179352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-28 00:56:26.179357 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179363 | orchestrator | 2026-02-28 00:56:26.179369 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-28 00:56:26.179375 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:04.083) 0:02:00.936 ***** 2026-02-28 00:56:26.179381 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.179388 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179394 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179399 | orchestrator | 2026-02-28 00:56:26.179405 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-28 00:56:26.179411 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:01.617) 0:02:02.553 ***** 2026-02-28 00:56:26.179421 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179427 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179432 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:56:26.179454 | orchestrator | 2026-02-28 00:56:26.179461 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-28 00:56:26.179472 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:02.516) 0:02:05.070 ***** 2026-02-28 00:56:26.179478 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.179484 | orchestrator | 2026-02-28 00:56:26.179490 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-28 00:56:26.179495 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:01.837) 0:02:06.907 ***** 2026-02-28 00:56:26.179503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.179515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.179621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.179653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 
00:56:26.179681 | orchestrator | 2026-02-28 00:56:26.179687 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-28 00:56:26.179693 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:08.560) 0:02:15.467 ***** 2026-02-28 00:56:26.179703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.179709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179739 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.179746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.179756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179779 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.179798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.179821 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179827 | orchestrator | 2026-02-28 00:56:26.179833 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-28 00:56:26.179840 | 
orchestrator | Saturday 28 February 2026 00:51:23 +0000 (0:00:02.403) 0:02:17.871 ***** 2026-02-28 00:56:26.179847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179880 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.179886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179892 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.179899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:26.179912 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.179918 | orchestrator | 2026-02-28 00:56:26.179925 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-28 
00:56:26.179931 | orchestrator | Saturday 28 February 2026 00:51:25 +0000 (0:00:01.683) 0:02:19.555 ***** 2026-02-28 00:56:26.179938 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.179945 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.179951 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.179957 | orchestrator | 2026-02-28 00:56:26.179963 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-28 00:56:26.179970 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:01.342) 0:02:20.897 ***** 2026-02-28 00:56:26.179976 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.179983 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.179989 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.179995 | orchestrator | 2026-02-28 00:56:26.180007 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-28 00:56:26.180014 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:02.404) 0:02:23.302 ***** 2026-02-28 00:56:26.180022 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.180028 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.180034 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.180040 | orchestrator | 2026-02-28 00:56:26.180047 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-28 00:56:26.180053 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:00.809) 0:02:24.111 ***** 2026-02-28 00:56:26.180059 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.180065 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.180072 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.180078 | orchestrator | 2026-02-28 00:56:26.180085 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-28 
00:56:26.180091 | orchestrator | Saturday 28 February 2026 00:51:30 +0000 (0:00:00.498) 0:02:24.610 ***** 2026-02-28 00:56:26.180098 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.180104 | orchestrator | 2026-02-28 00:56:26.180110 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-28 00:56:26.180117 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:01.433) 0:02:26.044 ***** 2026-02-28 00:56:26.180129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 00:56:26.180143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:26.180150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 00:56:26.180201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:26.180208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 00:56:26.180245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:26.180258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180308 | orchestrator | 2026-02-28 00:56:26.180315 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-28 00:56:26.180321 | orchestrator | Saturday 28 February 2026 00:51:38 +0000 (0:00:06.464) 0:02:32.508 ***** 2026-02-28 00:56:26.180327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 00:56:26.180336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:26.180342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180383 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.180389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 00:56:26.180399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:26.180414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 00:56:26.180420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:26.180432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180488 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.180494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.180517 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.180523 | orchestrator | 2026-02-28 00:56:26.180529 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-28 00:56:26.180536 | orchestrator | Saturday 28 February 2026 00:51:39 +0000 (0:00:00.967) 0:02:33.476 ***** 2026-02-28 00:56:26.180561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:26.180567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:26.180573 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.180580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:26.180586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:26.180592 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.180602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:26.180608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:26.180614 | 
orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.180620 | orchestrator | 2026-02-28 00:56:26.180626 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-28 00:56:26.180632 | orchestrator | Saturday 28 February 2026 00:51:40 +0000 (0:00:01.046) 0:02:34.522 ***** 2026-02-28 00:56:26.180638 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.180644 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.180650 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.180656 | orchestrator | 2026-02-28 00:56:26.180662 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-28 00:56:26.180668 | orchestrator | Saturday 28 February 2026 00:51:42 +0000 (0:00:02.000) 0:02:36.523 ***** 2026-02-28 00:56:26.180674 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.180681 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.180687 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.180693 | orchestrator | 2026-02-28 00:56:26.180699 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-28 00:56:26.180706 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:01.979) 0:02:38.502 ***** 2026-02-28 00:56:26.180712 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.180719 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.180725 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.180732 | orchestrator | 2026-02-28 00:56:26.180738 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-28 00:56:26.180745 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:00.606) 0:02:39.109 ***** 2026-02-28 00:56:26.180751 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.180757 | orchestrator | 2026-02-28 
00:56:26.180763 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-28 00:56:26.180770 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:00.869) 0:02:39.979 ***** 2026-02-28 00:56:26.180784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:56:26.180802 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.180810 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:56:26.180825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.180833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:56:26.180870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.180878 | orchestrator | 2026-02-28 00:56:26.180885 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-28 00:56:26.180892 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:05.210) 0:02:45.189 ***** 2026-02-28 00:56:26.180898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:56:26.180915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.180922 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.180932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:56:26.180947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.180954 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.180964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:56:26.180973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.180985 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.180991 | orchestrator | 2026-02-28 00:56:26.180997 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-28 00:56:26.181003 | orchestrator | Saturday 28 February 2026 00:51:54 +0000 (0:00:03.453) 0:02:48.642 ***** 2026-02-28 00:56:26.181009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:26.181019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:26.181025 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.181031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:26.181037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:26.181049 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.181055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:26.181062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:26.181068 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.181074 | orchestrator | 2026-02-28 00:56:26.181081 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-28 00:56:26.181087 | orchestrator | Saturday 28 February 2026 00:51:59 +0000 (0:00:04.957) 0:02:53.599 ***** 2026-02-28 00:56:26.181093 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.181098 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.181104 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.181110 | orchestrator | 2026-02-28 00:56:26.181116 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-28 00:56:26.181123 | orchestrator | Saturday 28 February 2026 00:52:00 +0000 (0:00:01.449) 0:02:55.049 ***** 2026-02-28 00:56:26.181129 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.181135 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.181141 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.181146 | orchestrator | 2026-02-28 00:56:26.181156 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-28 00:56:26.181162 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:02.111) 0:02:57.160 
***** 2026-02-28 00:56:26.181168 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.181175 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.181181 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.181186 | orchestrator | 2026-02-28 00:56:26.181193 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-28 00:56:26.181199 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:00.494) 0:02:57.654 ***** 2026-02-28 00:56:26.181205 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.181211 | orchestrator | 2026-02-28 00:56:26.181218 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-28 00:56:26.181224 | orchestrator | Saturday 28 February 2026 00:52:04 +0000 (0:00:00.945) 0:02:58.600 ***** 2026-02-28 00:56:26.181239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 00:56:26.181247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 00:56:26.181258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 00:56:26.181264 | orchestrator | 2026-02-28 00:56:26.181270 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-28 00:56:26.181276 | orchestrator | Saturday 28 February 2026 00:52:08 +0000 (0:00:03.954) 0:03:02.555 ***** 2026-02-28 00:56:26.181283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 00:56:26.181295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 00:56:26.181302 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.181307 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.181314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 00:56:26.181320 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.181326 | orchestrator | 2026-02-28 00:56:26.181332 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-28 00:56:26.181343 | orchestrator | Saturday 28 February 
2026 00:52:09 +0000 (0:00:00.801) 0:03:03.356 ***** 2026-02-28 00:56:26.181353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:26.181361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:26.181367 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.181374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:26.181381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:26.181387 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.181394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:26.181400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:26.181407 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.181414 | orchestrator | 2026-02-28 00:56:26.181419 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-28 00:56:26.181425 | orchestrator | Saturday 28 February 2026 00:52:09 +0000 (0:00:00.711) 0:03:04.067 ***** 2026-02-28 00:56:26.181431 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.181437 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.181442 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.181448 | orchestrator | 2026-02-28 00:56:26.181453 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-28 00:56:26.181459 | orchestrator | Saturday 28 February 2026 00:52:11 +0000 (0:00:01.350) 0:03:05.417 ***** 2026-02-28 00:56:26.181465 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.181471 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.181477 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.181484 | orchestrator | 2026-02-28 00:56:26.181490 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-28 00:56:26.181496 | orchestrator | Saturday 28 February 2026 00:52:13 +0000 (0:00:02.216) 0:03:07.633 ***** 2026-02-28 00:56:26.181502 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.181509 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.181515 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.181522 | orchestrator | 2026-02-28 00:56:26.181529 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-28 00:56:26.181536 | orchestrator | Saturday 28 February 2026 00:52:14 +0000 (0:00:00.745) 0:03:08.379 ***** 2026-02-28 00:56:26.181559 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.181566 | orchestrator | 2026-02-28 00:56:26.181572 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-28 00:56:26.181579 | orchestrator | Saturday 28 February 2026 00:52:15 +0000 (0:00:01.108) 0:03:09.488 ***** 2026-02-28 00:56:26.181598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:56:26.181617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:56:26.183370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:56:26.183599 | orchestrator | 2026-02-28 00:56:26.183648 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-28 00:56:26.183660 | orchestrator | Saturday 28 February 2026 00:52:19 +0000 (0:00:04.214) 0:03:13.702 ***** 2026-02-28 00:56:26.184143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:56:26.184176 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.184192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:56:26.184199 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.184213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:56:26.184227 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.184233 | orchestrator | 2026-02-28 00:56:26.184240 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-28 00:56:26.184246 | orchestrator | Saturday 28 February 2026 00:52:21 +0000 (0:00:01.677) 0:03:15.380 ***** 2026-02-28 00:56:26.184254 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:26.184271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:26.184279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:26.184287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:26.184294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:56:26.184301 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.184308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:26.184315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:26.184321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:26.184327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:26.184339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:56:26.184345 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.184355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:26.184364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-28 00:56:26.184374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-28 00:56:26.184384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-28 00:56:26.184398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-28 00:56:26.184419 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.184430 | orchestrator |
2026-02-28 00:56:26.184440 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-28 00:56:26.184450 | orchestrator | Saturday 28 February 2026 00:52:22 +0000 (0:00:01.333) 0:03:16.714 *****
2026-02-28 00:56:26.184460 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.184533 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.184629 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.184643 | orchestrator |
2026-02-28 00:56:26.184654 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-28 00:56:26.184664 | orchestrator | Saturday 28 February 2026 00:52:24 +0000 (0:00:01.436) 0:03:18.151 *****
2026-02-28 00:56:26.184674 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.184685 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.184695 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.184706 | orchestrator |
2026-02-28 00:56:26.184716 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-28 00:56:26.184727 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:02.276) 0:03:20.428 *****
2026-02-28 00:56:26.184738 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.184748 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.184759 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.184770 | orchestrator |
2026-02-28 00:56:26.184780 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-28 00:56:26.184790 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:00.364) 0:03:20.793 *****
2026-02-28 00:56:26.184800 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.184810 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.184816 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.184835 | orchestrator |
2026-02-28 00:56:26.184841 | orchestrator | TASK [include_role : keystone] *************************************************
2026-02-28 00:56:26.184847 | orchestrator | Saturday 28 February 2026 00:52:27 +0000 (0:00:00.605) 0:03:21.398 *****
2026-02-28 00:56:26.184854 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.184860 | orchestrator |
2026-02-28 00:56:26.184866 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-02-28 00:56:26.184873 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:00.982) 0:03:22.381 *****
2026-02-28 00:56:26.184881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 00:56:26.184907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 00:56:26.184922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 00:56:26.184929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 00:56:26.184938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 00:56:26.184958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 00:56:26.184976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 00:56:26.184987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 00:56:26.185003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 00:56:26.185015 | orchestrator |
2026-02-28 00:56:26.185024 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-02-28 00:56:26.185033 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:04.792) 0:03:27.173 *****
2026-02-28 00:56:26.185043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 00:56:26.185061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 00:56:26.185073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 00:56:26.185131 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.185150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 00:56:26.185167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 00:56:26.185177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 00:56:26.185195 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.185204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 00:56:26.185214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 00:56:26.185232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 00:56:26.185248 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.185257 | orchestrator |
2026-02-28 00:56:26.185265 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-02-28 00:56:26.185273 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.890) 0:03:28.063 *****
2026-02-28 00:56:26.185282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-28 00:56:26.185292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-28 00:56:26.185302 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.185632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-28 00:56:26.185672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-28 00:56:26.185682 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.185691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-28 00:56:26.185701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-28 00:56:26.185709 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.185718 | orchestrator |
2026-02-28 00:56:26.185727 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-02-28 00:56:26.185736 | orchestrator | Saturday 28 February 2026 00:52:35 +0000 (0:00:01.549) 0:03:29.612 *****
2026-02-28 00:56:26.185745 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.185754 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.185763 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.185772 | orchestrator |
2026-02-28 00:56:26.185781 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-28 00:56:26.185790 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:01.506) 0:03:31.119 *****
2026-02-28 00:56:26.185796 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.185802 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.185807 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.185813 | orchestrator |
2026-02-28 00:56:26.185818 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-28 00:56:26.185823 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:02.772) 0:03:33.892 *****
2026-02-28 00:56:26.185829 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.185834 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.185840 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.185845 | orchestrator |
2026-02-28 00:56:26.185851 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-28 00:56:26.185860 | orchestrator | Saturday 28 February 2026 00:52:40 +0000 (0:00:00.597) 0:03:34.489 *****
2026-02-28 00:56:26.185869 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.185877 | orchestrator |
2026-02-28 00:56:26.185886 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-02-28 00:56:26.185895 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:01.035) 0:03:35.524 *****
2026-02-28 00:56:26.185917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 00:56:26.185937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.185950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 00:56:26.185957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.185963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 00:56:26.185974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.185984 | orchestrator |
2026-02-28 00:56:26.185990 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-28 00:56:26.185996 | orchestrator | Saturday 28 February 2026 00:52:45 +0000 (0:00:04.455) 0:03:39.980 *****
2026-02-28 00:56:26.186009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 00:56:26.186127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.186186 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.186263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 00:56:26.186272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.186278 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.186292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 00:56:26.186309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.186315 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.186320 | orchestrator |
2026-02-28 00:56:26.186326 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-28 00:56:26.186331 | orchestrator | Saturday 28 February 2026 00:52:46 +0000 (0:00:00.809) 0:03:40.789 *****
2026-02-28 00:56:26.186338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-28 00:56:26.186344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-28 00:56:26.186350 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.186355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-28 00:56:26.186361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-28 00:56:26.186366 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.186372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-28 00:56:26.186377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-28 00:56:26.186383 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.186388 | orchestrator |
2026-02-28 00:56:26.186414 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-28 00:56:26.186420 | orchestrator | Saturday 28 February 2026 00:52:47 +0000 (0:00:00.820) 0:03:41.610 *****
2026-02-28 00:56:26.186425 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.186431 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.186438 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.186447 | orchestrator |
2026-02-28 00:56:26.186457 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-28 00:56:26.186471 | orchestrator | Saturday 28 February 2026 00:52:48 +0000 (0:00:01.236) 0:03:42.847 *****
2026-02-28 00:56:26.186481 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.186490 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.186506 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.186512 | orchestrator |
2026-02-28 00:56:26.186517 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-28 00:56:26.186523 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:02.077) 0:03:44.924 *****
2026-02-28 00:56:26.186528 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.186534 | orchestrator |
2026-02-28 00:56:26.186539 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-28 00:56:26.186563 | orchestrator | Saturday 28 February 2026 00:52:52 +0000 (0:00:01.429) 0:03:46.354 *****
2026-02-28 00:56:26.186886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-28 00:56:26.186906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.186938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.186945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.186951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-28 00:56:26.186971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.186978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.186988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-28 00:56:26.187008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187035 | orchestrator | 2026-02-28 00:56:26.187040 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-28 00:56:26.187047 | orchestrator | Saturday 28 February 2026 00:52:56 +0000 (0:00:03.874) 0:03:50.228 ***** 2026-02-28 00:56:26.187053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-28 00:56:26.187062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187084 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-28 00:56:26.187123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187145 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-28 00:56:26.187162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.187227 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187233 | orchestrator | 2026-02-28 00:56:26.187239 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-28 00:56:26.187245 | orchestrator | Saturday 28 
February 2026 00:52:56 +0000 (0:00:00.758) 0:03:50.987 ***** 2026-02-28 00:56:26.187251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:26.187258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:26.187263 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:26.187278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:26.187284 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:26.187295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:26.187301 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187307 | orchestrator | 2026-02-28 00:56:26.187313 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-28 00:56:26.187323 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:01.631) 0:03:52.619 ***** 2026-02-28 00:56:26.187328 | orchestrator | 
changed: [testbed-node-0] 2026-02-28 00:56:26.187334 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.187340 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.187345 | orchestrator | 2026-02-28 00:56:26.187351 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-28 00:56:26.187357 | orchestrator | Saturday 28 February 2026 00:52:59 +0000 (0:00:01.369) 0:03:53.989 ***** 2026-02-28 00:56:26.187362 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.187368 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.187374 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.187380 | orchestrator | 2026-02-28 00:56:26.187385 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-28 00:56:26.187391 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:02.298) 0:03:56.287 ***** 2026-02-28 00:56:26.187397 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.187403 | orchestrator | 2026-02-28 00:56:26.187408 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-28 00:56:26.187414 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:01.383) 0:03:57.671 ***** 2026-02-28 00:56:26.187420 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 00:56:26.187426 | orchestrator | 2026-02-28 00:56:26.187432 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-28 00:56:26.187437 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:03.064) 0:04:00.736 ***** 2026-02-28 00:56:26.187448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:26.187458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:26.187471 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:26.187484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:26.187490 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:26.187533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:26.187540 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187566 | orchestrator | 2026-02-28 00:56:26.187572 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-28 00:56:26.187578 | orchestrator | Saturday 28 February 2026 00:53:09 +0000 (0:00:02.756) 0:04:03.492 ***** 2026-02-28 00:56:26.187588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:26.187595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:26.187605 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:26.187620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:26.187626 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:26.187657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:26.187663 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187669 | orchestrator | 2026-02-28 00:56:26.187674 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-28 00:56:26.187680 | orchestrator | Saturday 28 February 2026 00:53:11 +0000 (0:00:02.159) 0:04:05.652 ***** 
2026-02-28 00:56:26.187686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:26.187693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:26.187698 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:26.187714 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:26.187720 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:26.187738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:26.187744 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187750 
| orchestrator | 2026-02-28 00:56:26.187755 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-28 00:56:26.187761 | orchestrator | Saturday 28 February 2026 00:53:14 +0000 (0:00:02.556) 0:04:08.208 ***** 2026-02-28 00:56:26.187766 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.187772 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.187777 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.187783 | orchestrator | 2026-02-28 00:56:26.187809 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-28 00:56:26.187814 | orchestrator | Saturday 28 February 2026 00:53:15 +0000 (0:00:01.921) 0:04:10.129 ***** 2026-02-28 00:56:26.187820 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187826 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187831 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187837 | orchestrator | 2026-02-28 00:56:26.187842 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-28 00:56:26.187847 | orchestrator | Saturday 28 February 2026 00:53:17 +0000 (0:00:01.596) 0:04:11.726 ***** 2026-02-28 00:56:26.187853 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.187858 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.187864 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.187905 | orchestrator | 2026-02-28 00:56:26.187912 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-28 00:56:26.187917 | orchestrator | Saturday 28 February 2026 00:53:17 +0000 (0:00:00.339) 0:04:12.065 ***** 2026-02-28 00:56:26.187923 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.187928 | orchestrator | 2026-02-28 00:56:26.187934 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2026-02-28 00:56:26.187940 | orchestrator | Saturday 28 February 2026 00:53:19 +0000 (0:00:01.411) 0:04:13.477 ***** 2026-02-28 00:56:26.187945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:56:26.187956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:56:26.187967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:56:26.187973 | orchestrator | 2026-02-28 00:56:26.187978 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-28 00:56:26.187984 | orchestrator | Saturday 28 February 2026 00:53:20 +0000 (0:00:01.484) 0:04:14.961 ***** 2026-02-28 00:56:26.187993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:56:26.187999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:56:26.188005 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.188011 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.188016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:56:26.188027 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.188033 | orchestrator | 2026-02-28 00:56:26.188038 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-28 00:56:26.188044 | orchestrator | Saturday 28 February 2026 00:53:21 +0000 (0:00:00.424) 0:04:15.385 ***** 2026-02-28 00:56:26.188050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:56:26.188055 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.188065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:56:26.188071 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.188077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:56:26.188082 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.188088 | orchestrator | 2026-02-28 00:56:26.188093 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-28 00:56:26.188099 | orchestrator | Saturday 28 February 2026 00:53:22 +0000 (0:00:00.865) 0:04:16.251 ***** 2026-02-28 00:56:26.188104 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.188110 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.188115 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.188121 | orchestrator | 2026-02-28 00:56:26.188126 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-28 00:56:26.188132 | orchestrator | Saturday 28 February 2026 00:53:22 +0000 (0:00:00.497) 0:04:16.748 ***** 2026-02-28 00:56:26.188137 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.188143 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.188148 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.188153 | orchestrator | 
2026-02-28 00:56:26.188159 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-28 00:56:26.188167 | orchestrator | Saturday 28 February 2026 00:53:23 +0000 (0:00:01.149) 0:04:17.897 ***** 2026-02-28 00:56:26.188173 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.188178 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.188184 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.188189 | orchestrator | 2026-02-28 00:56:26.188195 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-28 00:56:26.188200 | orchestrator | Saturday 28 February 2026 00:53:24 +0000 (0:00:00.343) 0:04:18.241 ***** 2026-02-28 00:56:26.188206 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.188211 | orchestrator | 2026-02-28 00:56:26.188216 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-28 00:56:26.188222 | orchestrator | Saturday 28 February 2026 00:53:25 +0000 (0:00:01.565) 0:04:19.807 ***** 2026-02-28 00:56:26.188228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 00:56:26.188239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:26.188271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 00:56:26.188289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2026-02-28 00:56:26.188385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.188395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.188510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.188520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.188772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 00:56:26.188831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:26.188872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.188906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.188920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.188935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.188953 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.188961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.188971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.188976 | orchestrator | 2026-02-28 00:56:26.188981 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-28 00:56:26.188986 | orchestrator | Saturday 28 February 2026 00:53:29 +0000 (0:00:04.336) 0:04:24.143 ***** 2026-02-28 00:56:26.188991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 00:56:26.188999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2026-02-28 00:56:26.189024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 00:56:26.189029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:26.189037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189074 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:26.189082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.189104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 
'timeout': '30'}}})  2026-02-28 00:56:26.189138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.189148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.189172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.189193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189201 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.189209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 
'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 00:56:26.189229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.189249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.189266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189275 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.189282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:26.189309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.189342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:26.189373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:26.189385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:26.189391 | orchestrator | skipping: [testbed-node-2] 2026-02-28 
00:56:26.189397 | orchestrator | 2026-02-28 00:56:26.189402 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-28 00:56:26.189413 | orchestrator | Saturday 28 February 2026 00:53:31 +0000 (0:00:01.665) 0:04:25.808 ***** 2026-02-28 00:56:26.189419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:26.189425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:26.189432 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.189440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:26.189446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:26.189452 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.189458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:26.189463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:26.189469 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.189474 | orchestrator | 2026-02-28 00:56:26.189479 | orchestrator | TASK 
[proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-28 00:56:26.189484 | orchestrator | Saturday 28 February 2026 00:53:33 +0000 (0:00:02.190) 0:04:27.999 ***** 2026-02-28 00:56:26.189489 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.189494 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.189499 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.189504 | orchestrator | 2026-02-28 00:56:26.189509 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-28 00:56:26.189514 | orchestrator | Saturday 28 February 2026 00:53:35 +0000 (0:00:01.325) 0:04:29.325 ***** 2026-02-28 00:56:26.189519 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.189524 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.189531 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.189536 | orchestrator | 2026-02-28 00:56:26.189541 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-28 00:56:26.189567 | orchestrator | Saturday 28 February 2026 00:53:37 +0000 (0:00:02.113) 0:04:31.439 ***** 2026-02-28 00:56:26.189576 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.189584 | orchestrator | 2026-02-28 00:56:26.189591 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-28 00:56:26.189598 | orchestrator | Saturday 28 February 2026 00:53:38 +0000 (0:00:01.340) 0:04:32.779 ***** 2026-02-28 00:56:26.189606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.189621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.189635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.189644 | orchestrator | 2026-02-28 00:56:26.189652 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-28 00:56:26.189660 | orchestrator | Saturday 28 February 2026 00:53:42 +0000 (0:00:04.109) 0:04:36.888 ***** 2026-02-28 00:56:26.189671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.189676 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.189681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.189691 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.189696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.189701 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.189706 | orchestrator | 2026-02-28 00:56:26.189711 | orchestrator | TASK [haproxy-config : Configuring firewall for 
placement] ********************* 2026-02-28 00:56:26.189716 | orchestrator | Saturday 28 February 2026 00:53:43 +0000 (0:00:00.562) 0:04:37.451 ***** 2026-02-28 00:56:26.189720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:26.189726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:26.189731 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.189739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:26.189744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:26.189749 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.189754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:26.189759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:26.189765 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.189770 | orchestrator | 2026-02-28 00:56:26.189774 | orchestrator | TASK 
[proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-28 00:56:26.189779 | orchestrator | Saturday 28 February 2026 00:53:44 +0000 (0:00:00.823) 0:04:38.274 ***** 2026-02-28 00:56:26.189784 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.189789 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.189793 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.189798 | orchestrator | 2026-02-28 00:56:26.189804 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-28 00:56:26.189812 | orchestrator | Saturday 28 February 2026 00:53:46 +0000 (0:00:02.070) 0:04:40.345 ***** 2026-02-28 00:56:26.189817 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.189822 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.189827 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.189832 | orchestrator | 2026-02-28 00:56:26.189837 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-28 00:56:26.189847 | orchestrator | Saturday 28 February 2026 00:53:48 +0000 (0:00:01.990) 0:04:42.336 ***** 2026-02-28 00:56:26.189852 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.189857 | orchestrator | 2026-02-28 00:56:26.189861 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-28 00:56:26.189866 | orchestrator | Saturday 28 February 2026 00:53:49 +0000 (0:00:01.651) 0:04:43.987 ***** 2026-02-28 00:56:26.189872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.189879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.189906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.189916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:26.190042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190065 | orchestrator | 2026-02-28 00:56:26.190070 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-28 00:56:26.190076 | orchestrator | Saturday 28 February 2026 00:53:54 +0000 (0:00:04.390) 0:04:48.378 ***** 2026-02-28 00:56:26.190085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.190090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190101 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.190112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.190121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190137 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.190208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 00:56:26.190225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:26.190240 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.190245 | orchestrator | 2026-02-28 00:56:26.190254 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-28 00:56:26.190260 | orchestrator | Saturday 28 February 2026 00:53:55 +0000 (0:00:01.057) 0:04:49.436 ***** 2026-02-28 00:56:26.190265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190290 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.190296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190327 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.190335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:26.190365 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.190372 | orchestrator | 2026-02-28 00:56:26.190379 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-28 00:56:26.190386 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:00.846) 0:04:50.282 ***** 2026-02-28 00:56:26.190394 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.190402 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.190410 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.190418 | orchestrator | 2026-02-28 00:56:26.190425 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-28 00:56:26.190432 | orchestrator | Saturday 28 February 2026 00:53:57 +0000 (0:00:01.332) 0:04:51.615 ***** 2026-02-28 00:56:26.190446 | orchestrator | changed: [testbed-node-0] 2026-02-28 
00:56:26.190454 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.190462 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.190470 | orchestrator | 2026-02-28 00:56:26.190483 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-28 00:56:26.190492 | orchestrator | Saturday 28 February 2026 00:53:59 +0000 (0:00:02.203) 0:04:53.818 ***** 2026-02-28 00:56:26.190500 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:26.190509 | orchestrator | 2026-02-28 00:56:26.190539 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-28 00:56:26.190589 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:01.765) 0:04:55.583 ***** 2026-02-28 00:56:26.190596 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-28 00:56:26.190602 | orchestrator | 2026-02-28 00:56:26.190607 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-28 00:56:26.190612 | orchestrator | Saturday 28 February 2026 00:54:02 +0000 (0:00:01.124) 0:04:56.707 ***** 2026-02-28 00:56:26.190617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-28 00:56:26.190628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-28 00:56:26.190633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-28 00:56:26.190640 | orchestrator | 2026-02-28 00:56:26.190645 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-28 00:56:26.190651 | orchestrator | Saturday 28 February 2026 00:54:07 +0000 (0:00:05.175) 0:05:01.883 ***** 2026-02-28 00:56:26.190657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:26.190663 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.190668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 
'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:26.190679 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.190685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:26.190691 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.190696 | orchestrator | 2026-02-28 00:56:26.190705 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-28 00:56:26.190711 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:01.450) 0:05:03.333 ***** 2026-02-28 00:56:26.190717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:26.190724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:26.190730 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:56:26.190737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:26.190746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:26.190753 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.190765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:26.190773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:26.190781 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.190790 | orchestrator | 2026-02-28 00:56:26.190798 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-28 00:56:26.190806 | orchestrator | Saturday 28 February 2026 00:54:10 +0000 (0:00:01.722) 0:05:05.056 ***** 2026-02-28 00:56:26.190812 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:26.190817 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:26.190821 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:26.190826 | orchestrator | 2026-02-28 00:56:26.190831 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-28 00:56:26.190836 | orchestrator | Saturday 28 February 
2026 00:54:13 +0000 (0:00:02.671) 0:05:07.727 *****
2026-02-28 00:56:26.190841 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.190846 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.190851 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.190855 | orchestrator |
2026-02-28 00:56:26.190860 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-28 00:56:26.190866 | orchestrator | Saturday 28 February 2026 00:54:16 +0000 (0:00:03.236) 0:05:10.964 *****
2026-02-28 00:56:26.190880 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-28 00:56:26.190889 | orchestrator |
2026-02-28 00:56:26.190896 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-28 00:56:26.190904 | orchestrator | Saturday 28 February 2026 00:54:18 +0000 (0:00:01.750) 0:05:12.715 *****
2026-02-28 00:56:26.190912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:56:26.190918 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.190923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:56:26.190928 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.190937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:56:26.190942 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.190947 | orchestrator |
2026-02-28 00:56:26.190952 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-28 00:56:26.190957 | orchestrator | Saturday 28 February 2026 00:54:20 +0000 (0:00:01.438) 0:05:14.153 *****
2026-02-28 00:56:26.190962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:56:26.190970 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.190982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:56:26.190991 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.190999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-28 00:56:26.191012 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191018 | orchestrator |
2026-02-28 00:56:26.191023 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-28 00:56:26.191028 | orchestrator | Saturday 28 February 2026 00:54:21 +0000 (0:00:01.480) 0:05:15.634 *****
2026-02-28 00:56:26.191032 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.191037 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.191042 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191047 | orchestrator |
2026-02-28 00:56:26.191052 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-28 00:56:26.191056 | orchestrator | Saturday 28 February 2026 00:54:23 +0000 (0:00:02.037) 0:05:17.672 *****
2026-02-28 00:56:26.191061 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.191067 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.191071 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.191076 | orchestrator |
2026-02-28 00:56:26.191081 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-28 00:56:26.191086 | orchestrator | Saturday 28 February 2026 00:54:25 +0000 (0:00:02.455) 0:05:20.127 *****
2026-02-28 00:56:26.191091 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.191095 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.191100 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.191105 | orchestrator |
2026-02-28 00:56:26.191110 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-28 00:56:26.191115 | orchestrator | Saturday 28 February 2026 00:54:28 +0000 (0:00:02.879) 0:05:23.006 *****
2026-02-28 00:56:26.191120 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-28 00:56:26.191125 | orchestrator |
2026-02-28 00:56:26.191130 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-28 00:56:26.191134 | orchestrator | Saturday 28 February 2026 00:54:29 +0000 (0:00:00.835) 0:05:23.842 *****
2026-02-28 00:56:26.191142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:56:26.191147 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.191151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:56:26.191156 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.191161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:56:26.191169 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191174 | orchestrator |
2026-02-28 00:56:26.191181 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-28 00:56:26.191186 | orchestrator | Saturday 28 February 2026 00:54:30 +0000 (0:00:01.198) 0:05:25.040 *****
2026-02-28 00:56:26.191191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:56:26.191196 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.191200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:56:26.191205 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.191210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-28 00:56:26.191215 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191219 | orchestrator |
2026-02-28 00:56:26.191224 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-28 00:56:26.191228 | orchestrator | Saturday 28 February 2026 00:54:32 +0000 (0:00:01.346) 0:05:26.386 *****
2026-02-28 00:56:26.191233 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.191238 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.191242 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191247 | orchestrator |
2026-02-28 00:56:26.191251 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-28 00:56:26.191256 | orchestrator | Saturday 28 February 2026 00:54:33 +0000 (0:00:01.509) 0:05:27.895 *****
2026-02-28 00:56:26.191260 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.191265 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.191270 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.191274 | orchestrator |
2026-02-28 00:56:26.191279 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-28 00:56:26.191284 | orchestrator | Saturday 28 February 2026 00:54:36 +0000 (0:00:02.615) 0:05:30.511 *****
2026-02-28 00:56:26.191288 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.191293 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.191297 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.191302 | orchestrator |
2026-02-28 00:56:26.191307 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-28 00:56:26.191311 | orchestrator | Saturday 28 February 2026 00:54:39 +0000 (0:00:03.622) 0:05:34.133 *****
2026-02-28 00:56:26.191318 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.191323 | orchestrator |
2026-02-28 00:56:26.191331 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-28 00:56:26.191336 | orchestrator | Saturday 28 February 2026 00:54:41 +0000 (0:00:01.784) 0:05:35.917 *****
2026-02-28 00:56:26.191341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.191349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:56:26.191355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.191373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.191383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:56:26.191391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.191405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.191417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:56:26.191422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.191439 | orchestrator |
2026-02-28 00:56:26.191444 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-02-28 00:56:26.191449 | orchestrator | Saturday 28 February 2026 00:54:45 +0000 (0:00:04.105) 0:05:40.023 *****
2026-02-28 00:56:26.191454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.191459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:56:26.191470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.191488 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.191493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.191497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:56:26.191502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.191523 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.191530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.191536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-28 00:56:26.191540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 00:56:26.191573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:26.191578 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191582 | orchestrator |
2026-02-28 00:56:26.191587 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-28 00:56:26.191592 | orchestrator | Saturday 28 February 2026 00:54:46 +0000 (0:00:00.765) 0:05:40.788 *****
2026-02-28 00:56:26.191598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-28 00:56:26.191606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-28 00:56:26.191613 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.191621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-28 00:56:26.191629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-28 00:56:26.191637 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.191648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-28 00:56:26.191656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-28 00:56:26.191663 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.191668 | orchestrator |
2026-02-28 00:56:26.191672 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-28 00:56:26.191677 | orchestrator | Saturday 28 February 2026 00:54:47 +0000 (0:00:01.352) 0:05:42.140 *****
2026-02-28 00:56:26.191682 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.191686 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.191691 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.191696 | orchestrator |
2026-02-28 00:56:26.191700 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-28 00:56:26.191705 | orchestrator | Saturday 28 February 2026 00:54:49 +0000 (0:00:01.418) 0:05:43.559 *****
2026-02-28 00:56:26.191710 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.191714 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.191719 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.191723 | orchestrator |
2026-02-28 00:56:26.191728 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-28 00:56:26.191736 | orchestrator | Saturday 28 February 2026 00:54:51 +0000 (0:00:02.043) 0:05:45.602 *****
2026-02-28 00:56:26.191741 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.191745 | orchestrator |
2026-02-28 00:56:26.191750 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-02-28 00:56:26.191754 | orchestrator | Saturday 28 February 2026 00:54:52 +0000 (0:00:01.314) 0:05:46.916 *****
2026-02-28 00:56:26.191760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-28 00:56:26.191768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-28 00:56:26.191773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-28 00:56:26.191781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group':
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:56:26.191792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:56:26.191800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:56:26.191805 | orchestrator | 2026-02-28 00:56:26.191810 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-28 00:56:26.191814 | orchestrator | Saturday 28 February 2026 00:54:58 +0000 (0:00:05.722) 0:05:52.639 ***** 2026-02-28 00:56:26.191819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:56:26.191827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:56:26.191836 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.191841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:56:26.191850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:56:26.191855 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.191860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:56:26.191869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:56:26.191878 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.191883 | orchestrator | 2026-02-28 00:56:26.191888 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-28 00:56:26.191896 | orchestrator | Saturday 28 February 2026 00:54:59 +0000 (0:00:00.667) 0:05:53.307 ***** 2026-02-28 
00:56:26.191901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-28 00:56:26.191905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:26.191911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:26.191916 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.191920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-28 00:56:26.191925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:26.191930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:26.191935 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.191939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-28 00:56:26.192007 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:26.192014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:26.192019 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.192023 | orchestrator | 2026-02-28 00:56:26.192028 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-28 00:56:26.192033 | orchestrator | Saturday 28 February 2026 00:55:00 +0000 (0:00:00.942) 0:05:54.249 ***** 2026-02-28 00:56:26.192037 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.192042 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.192046 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.192051 | orchestrator | 2026-02-28 00:56:26.192055 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-28 00:56:26.192060 | orchestrator | Saturday 28 February 2026 00:55:00 +0000 (0:00:00.789) 0:05:55.039 ***** 2026-02-28 00:56:26.192065 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.192073 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.192078 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.192082 | orchestrator | 2026-02-28 00:56:26.192087 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-28 00:56:26.192092 | orchestrator | Saturday 28 February 2026 00:55:02 +0000 (0:00:01.228) 0:05:56.267 ***** 2026-02-28 00:56:26.192099 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-28 00:56:26.192106 | orchestrator | 2026-02-28 00:56:26.192117 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-28 00:56:26.192126 | orchestrator | Saturday 28 February 2026 00:55:03 +0000 (0:00:01.371) 0:05:57.638 ***** 2026-02-28 00:56:26.192133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 00:56:26.192141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:26.192149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 00:56:26.192157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:26.192196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 00:56:26.192254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:26.192264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 00:56:26.192297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:26.192309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 00:56:26.192354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:26.192362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 00:56:26.192455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:26.192460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192478 | orchestrator | 2026-02-28 00:56:26.192487 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-28 00:56:26.192492 | orchestrator | Saturday 28 February 2026 00:55:08 +0000 (0:00:04.567) 0:06:02.206 ***** 2026-02-28 00:56:26.192496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 00:56:26.192504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:26.192509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 00:56:26.192541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:26.192572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192595 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.192604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 00:56:26.192612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:26.192630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2026-02-28 00:56:26.192646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 00:56:26.192666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:26.192671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192723 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.192732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 00:56:26.192737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:26.192742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 00:56:26.192769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:26.192777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192782 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:26.192787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:26.192791 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.192799 | orchestrator | 2026-02-28 00:56:26.192804 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-28 00:56:26.192809 | orchestrator | Saturday 28 February 2026 00:55:09 +0000 (0:00:01.928) 0:06:04.134 ***** 2026-02-28 00:56:26.192814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-28 00:56:26.192820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-28 00:56:26.192825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:26.192831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:26.192836 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.192844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-28 00:56:26.192849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-28 00:56:26.192854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-28 00:56:26.192859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-28 00:56:26.192864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:26.192872 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:26.192877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:26.192882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:26.192886 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.192891 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.192895 | orchestrator | 2026-02-28 00:56:26.192900 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-28 00:56:26.192908 | orchestrator | Saturday 28 February 2026 00:55:11 +0000 (0:00:01.213) 0:06:05.347 ***** 2026-02-28 00:56:26.192913 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:26.192918 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:26.192923 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:26.192927 | orchestrator | 2026-02-28 00:56:26.192942 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-28 00:56:26.192946 | orchestrator | Saturday 28 February 2026 00:55:11 +0000 (0:00:00.483) 0:06:05.831 ***** 2026-02-28 00:56:26.192951 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:56:26.192956 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.192961 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.192965 | orchestrator |
2026-02-28 00:56:26.192970 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-28 00:56:26.192975 | orchestrator | Saturday 28 February 2026 00:55:13 +0000 (0:00:01.613) 0:06:07.444 *****
2026-02-28 00:56:26.192979 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.192984 | orchestrator |
2026-02-28 00:56:26.192988 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-28 00:56:26.192993 | orchestrator | Saturday 28 February 2026 00:55:15 +0000 (0:00:02.026) 0:06:09.470 *****
2026-02-28 00:56:26.193001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-28 00:56:26.193007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-28 00:56:26.193015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-28 00:56:26.193024 | orchestrator |
2026-02-28 00:56:26.193029 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-28 00:56:26.193034 | orchestrator | Saturday 28 February 2026 00:55:18 +0000 (0:00:02.830) 0:06:12.301 *****
2026-02-28 00:56:26.193039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-28 00:56:26.193044 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-28 00:56:26.193057 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-28 00:56:26.193067 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193071 | orchestrator |
2026-02-28 00:56:26.193076 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-28 00:56:26.193084 | orchestrator | Saturday 28 February 2026 00:55:18 +0000 (0:00:00.448) 0:06:12.750 *****
2026-02-28 00:56:26.193092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-28 00:56:26.193097 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-28 00:56:26.193106 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-28 00:56:26.193115 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193120 | orchestrator |
2026-02-28 00:56:26.193124 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-28 00:56:26.193129 | orchestrator | Saturday 28 February 2026 00:55:19 +0000 (0:00:01.137) 0:06:13.888 *****
2026-02-28 00:56:26.193134 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193138 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193143 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193147 | orchestrator |
2026-02-28 00:56:26.193152 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-28 00:56:26.193157 | orchestrator | Saturday 28 February 2026 00:55:20 +0000 (0:00:00.423) 0:06:14.312 *****
2026-02-28 00:56:26.193161 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193166 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193170 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193175 | orchestrator |
2026-02-28 00:56:26.193180 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-28 00:56:26.193184 | orchestrator | Saturday 28 February 2026 00:55:21 +0000 (0:00:01.583) 0:06:15.896 *****
2026-02-28 00:56:26.193189 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:26.193193 | orchestrator |
2026-02-28 00:56:26.193198 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-28 00:56:26.193203 | orchestrator | Saturday 28 February 2026 00:55:23 +0000 (0:00:02.062) 0:06:17.959 *****
2026-02-28 00:56:26.193208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193254 | orchestrator |
2026-02-28 00:56:26.193259 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-02-28 00:56:26.193267 | orchestrator | Saturday 28 February 2026 00:55:30 +0000 (0:00:07.017) 0:06:24.976 *****
2026-02-28 00:56:26.193275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193285 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193305 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-28 00:56:26.193322 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193327 | orchestrator |
2026-02-28 00:56:26.193332 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-02-28 00:56:26.193336 | orchestrator | Saturday 28 February 2026 00:55:31 +0000 (0:00:00.692) 0:06:25.669 *****
2026-02-28 00:56:26.193341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193360 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193390 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-28 00:56:26.193413 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193418 | orchestrator |
2026-02-28 00:56:26.193423 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-02-28 00:56:26.193427 | orchestrator | Saturday 28 February 2026 00:55:33 +0000 (0:00:01.822) 0:06:27.492 *****
2026-02-28 00:56:26.193432 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.193437 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.193441 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.193446 | orchestrator |
2026-02-28 00:56:26.193450 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-02-28 00:56:26.193455 | orchestrator | Saturday 28 February 2026 00:55:34 +0000 (0:00:01.356) 0:06:28.848 *****
2026-02-28 00:56:26.193460 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.193464 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.193469 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.193473 | orchestrator |
2026-02-28 00:56:26.193517 | orchestrator | TASK [include_role : swift] ****************************************************
2026-02-28 00:56:26.193530 | orchestrator | Saturday 28 February 2026 00:55:36 +0000 (0:00:02.207) 0:06:31.056 *****
2026-02-28 00:56:26.193535 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193539 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193607 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193614 | orchestrator |
2026-02-28 00:56:26.193619 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-02-28 00:56:26.193624 | orchestrator | Saturday 28 February 2026 00:55:37 +0000 (0:00:00.346) 0:06:31.403 *****
2026-02-28 00:56:26.193629 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193634 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193641 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193649 | orchestrator |
2026-02-28 00:56:26.193656 | orchestrator | TASK [include_role : trove] ****************************************************
2026-02-28 00:56:26.193664 | orchestrator | Saturday 28 February 2026 00:55:37 +0000 (0:00:00.340) 0:06:31.743 *****
2026-02-28 00:56:26.193671 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193678 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193685 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193693 | orchestrator |
2026-02-28 00:56:26.193699 | orchestrator | TASK [include_role : venus] ****************************************************
2026-02-28 00:56:26.193706 | orchestrator | Saturday 28 February 2026 00:55:38 +0000 (0:00:00.712) 0:06:32.456 *****
2026-02-28 00:56:26.193713 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193727 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193735 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193742 | orchestrator |
2026-02-28 00:56:26.193749 | orchestrator | TASK [include_role : watcher] **************************************************
2026-02-28 00:56:26.193757 | orchestrator | Saturday 28 February 2026 00:55:38 +0000 (0:00:00.376) 0:06:32.832 *****
2026-02-28 00:56:26.193764 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193771 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193780 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193785 | orchestrator |
2026-02-28 00:56:26.193789 | orchestrator | TASK [include_role : zun] ******************************************************
2026-02-28 00:56:26.193794 | orchestrator | Saturday 28 February 2026 00:55:39 +0000 (0:00:00.429) 0:06:33.261 *****
2026-02-28 00:56:26.193800 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.193807 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.193814 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.193822 | orchestrator |
2026-02-28 00:56:26.193830 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-28 00:56:26.193838 | orchestrator | Saturday 28 February 2026 00:55:40 +0000 (0:00:00.912) 0:06:34.174 *****
2026-02-28 00:56:26.193845 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.193852 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.193859 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.193867 | orchestrator |
2026-02-28 00:56:26.193874 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-02-28 00:56:26.193882 | orchestrator | Saturday 28 February 2026 00:55:40 +0000 (0:00:00.741) 0:06:34.915 *****
2026-02-28 00:56:26.193890 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.193897 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.193905 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.193912 | orchestrator |
2026-02-28 00:56:26.193920 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-28 00:56:26.193926 | orchestrator | Saturday 28 February 2026 00:55:41 +0000 (0:00:00.429) 0:06:35.345 *****
2026-02-28 00:56:26.193931 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.193936 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.193941 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.193945 | orchestrator |
2026-02-28 00:56:26.193957 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-28 00:56:26.193962 | orchestrator | Saturday 28 February 2026 00:55:42 +0000 (0:00:01.056) 0:06:36.401 *****
2026-02-28 00:56:26.193967 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.193972 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.193976 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.193981 | orchestrator |
2026-02-28 00:56:26.193985 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-28 00:56:26.193990 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:01.260) 0:06:37.661 *****
2026-02-28 00:56:26.193994 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.193999 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.194003 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.194008 | orchestrator |
2026-02-28 00:56:26.194035 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-28 00:56:26.194041 | orchestrator | Saturday 28 February 2026 00:55:44 +0000 (0:00:00.919) 0:06:38.581 *****
2026-02-28 00:56:26.194046 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.194050 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.194055 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.194059 | orchestrator |
2026-02-28 00:56:26.194064 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-28 00:56:26.194069 | orchestrator | Saturday 28 February 2026 00:55:53 +0000 (0:00:08.595) 0:06:47.176 *****
2026-02-28 00:56:26.194073 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.194078 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.194082 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.194092 | orchestrator |
2026-02-28 00:56:26.194096 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-28 00:56:26.194101 | orchestrator | Saturday 28 February 2026 00:55:53 +0000 (0:00:00.780) 0:06:47.956 *****
2026-02-28 00:56:26.194105 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.194110 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.194119 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.194124 | orchestrator |
2026-02-28 00:56:26.194129 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-28 00:56:26.194134 | orchestrator | Saturday 28 February 2026 00:56:04 +0000 (0:00:10.881) 0:06:58.838 *****
2026-02-28 00:56:26.194138 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.194143 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.194147 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.194151 | orchestrator |
2026-02-28 00:56:26.194155 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-28 00:56:26.194159 | orchestrator | Saturday 28 February 2026 00:56:09 +0000 (0:00:05.146) 0:07:03.985 *****
2026-02-28 00:56:26.194163 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:26.194168 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:26.194172 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:26.194178 | orchestrator |
2026-02-28 00:56:26.194184 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-28 00:56:26.194190 | orchestrator | Saturday 28 February 2026 00:56:18 +0000 (0:00:08.190) 0:07:12.176 *****
2026-02-28 00:56:26.194197 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.194204 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.194210 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.194217 | orchestrator |
2026-02-28 00:56:26.194225 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-28 00:56:26.194230 | orchestrator | Saturday 28 February 2026 00:56:18 +0000 (0:00:00.356) 0:07:12.532 *****
2026-02-28 00:56:26.194235 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.194239 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.194243 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.194247 | orchestrator |
2026-02-28 00:56:26.194251 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-28 00:56:26.194257 | orchestrator | Saturday 28 February 2026 00:56:18 +0000 (0:00:00.346) 0:07:12.879 *****
2026-02-28 00:56:26.194264 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.194272 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.194278 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.194285 | orchestrator |
2026-02-28 00:56:26.194291 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-28 00:56:26.194296 | orchestrator | Saturday 28 February 2026 00:56:19 +0000 (0:00:00.735) 0:07:13.614 *****
2026-02-28 00:56:26.194300 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.194304 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.194308 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.194312 | orchestrator |
2026-02-28 00:56:26.194316 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-28 00:56:26.194321 | orchestrator | Saturday 28 February 2026 00:56:19 +0000 (0:00:00.372) 0:07:14.039 *****
2026-02-28 00:56:26.194325 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.194329 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.194333 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.194337 | orchestrator |
2026-02-28 00:56:26.194341 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-28 00:56:26.194345 | orchestrator | Saturday 28 February 2026 00:56:20 +0000 (0:00:00.394) 0:07:14.411 *****
2026-02-28 00:56:26.194349 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:26.194353 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:26.194358 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:26.194362 | orchestrator |
2026-02-28 00:56:26.194366 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-28 00:56:26.194375 | orchestrator | Saturday 28 February 2026 00:56:20 +0000 (0:00:00.394) 0:07:14.805 *****
2026-02-28 00:56:26.194379 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.194383 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.194387 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.194391 | orchestrator |
2026-02-28 00:56:26.194396 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-28 00:56:26.194400 | orchestrator | Saturday 28 February 2026 00:56:22 +0000 (0:00:01.369) 0:07:16.175 *****
2026-02-28 00:56:26.194404 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:56:26.194408 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:56:26.194412 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:56:26.194416 | orchestrator |
2026-02-28 00:56:26.194420 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:56:26.194429 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-28 00:56:26.194434 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-28 00:56:26.194438 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-28 00:56:26.194442 | orchestrator |
2026-02-28 00:56:26.194446 | orchestrator |
2026-02-28 00:56:26.194450 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:56:26.194455 | orchestrator | Saturday 28 February 2026 00:56:22 +0000 (0:00:00.886) 0:07:17.061 *****
2026-02-28 00:56:26.194459 | orchestrator | ===============================================================================
2026-02-28 00:56:26.194463 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.88s
2026-02-28 00:56:26.194467 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.60s
2026-02-28 00:56:26.194471 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 8.56s
2026-02-28 00:56:26.194475 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.19s
2026-02-28 00:56:26.194479 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.05s
2026-02-28 00:56:26.194483 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.02s
2026-02-28 00:56:26.194487 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.46s
2026-02-28 00:56:26.194494 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.72s
2026-02-28 00:56:26.194499 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 5.24s
2026-02-28 00:56:26.194503 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.21s
2026-02-28 00:56:26.194507 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.18s
2026-02-28 00:56:26.194511 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.15s
2026-02-28 00:56:26.194515 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.96s
2026-02-28 00:56:26.194519 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 4.95s
2026-02-28 00:56:26.194523 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.79s
2026-02-28 00:56:26.194528 | orchestrator | loadbalancer :
Copying over proxysql config ----------------------------- 4.58s 2026-02-28 00:56:26.194532 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.57s 2026-02-28 00:56:26.194536 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.57s 2026-02-28 00:56:26.194540 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.52s 2026-02-28 00:56:26.194559 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.46s 2026-02-28 00:56:26.194568 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:56:26.194573 | orchestrator | 2026-02-28 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:29.228155 | orchestrator | 2026-02-28 00:56:29 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 00:56:29.229783 | orchestrator | 2026-02-28 00:56:29 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:56:29.230648 | orchestrator | 2026-02-28 00:56:29 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:56:29.230690 | orchestrator | 2026-02-28 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:32.275363 | orchestrator | 2026-02-28 00:56:32 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 00:56:32.275980 | orchestrator | 2026-02-28 00:56:32 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:56:32.278396 | orchestrator | 2026-02-28 00:56:32 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:56:32.278436 | orchestrator | 2026-02-28 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:35.317232 | orchestrator | 2026-02-28 00:56:35 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 
00:56:35.317705 | orchestrator | 2026-02-28 00:56:35 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:56:35.318872 | orchestrator | 2026-02-28 00:56:35 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:56:35.318979 | orchestrator | 2026-02-28 00:56:35 | INFO  | Wait 1 second(s) until the next check
[... identical status block repeated roughly every 3 seconds from 00:56:38 to 00:58:28: tasks c9faee9b-4874-43a9-bd14-e4ee90f11eb9, c71a7c9f-4e6d-4f12-aaba-b0ce04927c81, and 78c2351f-21d4-413b-938b-2f17c462612f remain in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
2026-02-28 00:58:31.298731 | orchestrator | 2026-02-28 00:58:31 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 00:58:31.300911 | orchestrator | 2026-02-28 00:58:31 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:58:31.303148 | orchestrator | 2026-02-28 00:58:31 |
INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:58:31.303183 | orchestrator | 2026-02-28 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:58:34.366845 | orchestrator | 2026-02-28 00:58:34 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 00:58:34.368855 | orchestrator | 2026-02-28 00:58:34 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state STARTED 2026-02-28 00:58:34.371241 | orchestrator | 2026-02-28 00:58:34 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:58:34.371303 | orchestrator | 2026-02-28 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:58:37.429889 | orchestrator | 2026-02-28 00:58:37 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:58:37.430731 | orchestrator | 2026-02-28 00:58:37 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 00:58:37.437716 | orchestrator | 2026-02-28 00:58:37 | INFO  | Task c71a7c9f-4e6d-4f12-aaba-b0ce04927c81 is in state SUCCESS 2026-02-28 00:58:37.440058 | orchestrator | 2026-02-28 00:58:37.440113 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 00:58:37.440121 | orchestrator | 2.16.14 2026-02-28 00:58:37.440127 | orchestrator | 2026-02-28 00:58:37.440132 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-28 00:58:37.440138 | orchestrator | 2026-02-28 00:58:37.440143 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-28 00:58:37.440148 | orchestrator | Saturday 28 February 2026 00:46:38 +0000 (0:00:00.796) 0:00:00.796 ***** 2026-02-28 00:58:37.440154 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.440160 
| orchestrator | 2026-02-28 00:58:37.440165 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-28 00:58:37.440170 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:01.076) 0:00:01.873 ***** 2026-02-28 00:58:37.440174 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.440179 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.440184 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.440188 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.440193 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.440198 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.440205 | orchestrator | 2026-02-28 00:58:37.440212 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-28 00:58:37.440219 | orchestrator | Saturday 28 February 2026 00:46:41 +0000 (0:00:01.645) 0:00:03.518 ***** 2026-02-28 00:58:37.440226 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.440232 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.440239 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.440246 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.440253 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.440261 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.440267 | orchestrator | 2026-02-28 00:58:37.440272 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-28 00:58:37.440277 | orchestrator | Saturday 28 February 2026 00:46:41 +0000 (0:00:00.706) 0:00:04.225 ***** 2026-02-28 00:58:37.440281 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.440286 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.440290 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.440295 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.440299 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.440304 | orchestrator | ok: 
[testbed-node-2]
2026-02-28 00:58:37.440309 | orchestrator |
2026-02-28 00:58:37.440313 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-28 00:58:37.440318 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.997) 0:00:05.222 *****
2026-02-28 00:58:37.440323 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.440328 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.440332 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.440337 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.440343 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.440351 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.440358 | orchestrator |
2026-02-28 00:58:37.440366 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-28 00:58:37.440373 | orchestrator | Saturday 28 February 2026 00:46:43 +0000 (0:00:00.682) 0:00:05.905 *****
2026-02-28 00:58:37.440380 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.440387 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.440394 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.440402 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.440409 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.440416 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.440423 | orchestrator |
2026-02-28 00:58:37.440430 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-28 00:58:37.440459 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:00.660) 0:00:06.565 *****
2026-02-28 00:58:37.440467 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.440474 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.440482 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.440489 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.440497 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.440504 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.440511 | orchestrator |
2026-02-28 00:58:37.440518 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-28 00:58:37.440526 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:00.862) 0:00:07.428 *****
2026-02-28 00:58:37.440533 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.440542 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.440550 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.440559 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.440567 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.440574 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.440582 | orchestrator |
2026-02-28 00:58:37.440588 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-28 00:58:37.440596 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:00.788) 0:00:08.216 *****
2026-02-28 00:58:37.440603 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.440658 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.440665 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.440673 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.440681 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.440690 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.440698 | orchestrator |
2026-02-28 00:58:37.440705 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-28 00:58:37.440715 | orchestrator | Saturday 28 February 2026 00:46:46 +0000 (0:00:01.021) 0:00:09.237 *****
2026-02-28 00:58:37.440723 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:58:37.440732 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:58:37.440756 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:58:37.440764 | orchestrator |
2026-02-28 00:58:37.440771 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-28 00:58:37.440778 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:00.601) 0:00:09.839 *****
2026-02-28 00:58:37.440786 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.440794 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.440801 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.440824 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.440834 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.440841 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.440849 | orchestrator |
2026-02-28 00:58:37.440856 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-28 00:58:37.440864 | orchestrator | Saturday 28 February 2026 00:46:48 +0000 (0:00:01.089) 0:00:10.929 *****
2026-02-28 00:58:37.440872 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:58:37.440880 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:58:37.440887 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:58:37.440895 | orchestrator |
2026-02-28 00:58:37.440903 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-28 00:58:37.440910 | orchestrator | Saturday 28 February 2026 00:46:50 +0000 (0:00:02.343) 0:00:13.273 *****
2026-02-28 00:58:37.440918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:58:37.441034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:58:37.441044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:58:37.441064 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441072 | orchestrator |
2026-02-28 00:58:37.441081 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-28 00:58:37.441089 | orchestrator | Saturday 28 February 2026 00:46:51 +0000 (0:00:00.931) 0:00:14.204 *****
2026-02-28 00:58:37.441101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441130 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441139 | orchestrator |
2026-02-28 00:58:37.441147 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-28 00:58:37.441156 | orchestrator | Saturday 28 February 2026 00:46:52 +0000 (0:00:00.962) 0:00:15.166 *****
2026-02-28 00:58:37.441167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441245 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441254 | orchestrator |
2026-02-28 00:58:37.441263 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-28 00:58:37.441272 | orchestrator | Saturday 28 February 2026 00:46:53 +0000 (0:00:00.321) 0:00:15.487 *****
2026-02-28 00:58:37.441297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-28 00:46:49.242425', 'end': '2026-02-28 00:46:49.333233', 'delta': '0:00:00.090808', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-28 00:46:49.951151', 'end': '2026-02-28 00:46:50.061549', 'delta': '0:00:00.110398', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-28 00:46:50.605157', 'end': '2026-02-28 00:46:50.704108', 'delta': '0:00:00.098951', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.441331 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441340 | orchestrator |
2026-02-28 00:58:37.441348 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-28 00:58:37.441356 | orchestrator | Saturday 28 February 2026 00:46:53 +0000 (0:00:00.156) 0:00:15.644 *****
2026-02-28 00:58:37.441364 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.441371 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.441379 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.441386 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.441394 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.441400 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.441408 | orchestrator |
2026-02-28 00:58:37.441415 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-28 00:58:37.441423 | orchestrator | Saturday 28 February 2026 00:46:55 +0000 (0:00:01.954) 0:00:17.598 *****
2026-02-28 00:58:37.441431 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:37.441439 | orchestrator |
2026-02-28 00:58:37.441446 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-28 00:58:37.441454 | orchestrator | Saturday 28 February 2026 00:46:56 +0000 (0:00:00.892) 0:00:18.491 *****
2026-02-28 00:58:37.441462 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441469 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.441477 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.441484 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.441492 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.441500 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.441507 | orchestrator |
2026-02-28 00:58:37.441515 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-28 00:58:37.441522 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:01.341) 0:00:19.833 *****
2026-02-28 00:58:37.441530 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441537 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.441545 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.441553 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.441560 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.441568 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.441576 | orchestrator |
2026-02-28 00:58:37.441584 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 00:58:37.441591 | orchestrator | Saturday 28 February 2026 00:46:59 +0000 (0:00:01.657) 0:00:21.491 *****
2026-02-28 00:58:37.441599 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441633 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.441646 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.441655 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.441662 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.441669 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.441688 | orchestrator |
2026-02-28 00:58:37.441696 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-28 00:58:37.441704 | orchestrator | Saturday 28 February 2026 00:47:00 +0000 (0:00:01.700) 0:00:23.191 *****
2026-02-28 00:58:37.441712 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441719 | orchestrator |
2026-02-28 00:58:37.441727 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-28 00:58:37.441735 | orchestrator | Saturday 28 February 2026 00:47:00 +0000 (0:00:00.222) 0:00:23.414 *****
2026-02-28 00:58:37.441743 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441751 | orchestrator |
2026-02-28 00:58:37.441800 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 00:58:37.441811 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:00.296) 0:00:23.710 *****
2026-02-28 00:58:37.441819 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441828 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.441836 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.441850 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.441893 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.441903 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.441912 | orchestrator |
2026-02-28 00:58:37.441920 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-28 00:58:37.441927 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:00.627) 0:00:24.338 *****
2026-02-28 00:58:37.441935 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.441943 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.441952 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.442001 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.442011 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.442147 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.442156 | orchestrator |
2026-02-28 00:58:37.442164 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-28 00:58:37.442172 | orchestrator | Saturday 28 February 2026 00:47:03 +0000 (0:00:01.901) 0:00:26.240 *****
2026-02-28 00:58:37.442180 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.442189 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.442197 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.442205 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.442214 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.442222 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.442230 | orchestrator |
2026-02-28 00:58:37.442239 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-28 00:58:37.442247 | orchestrator | Saturday 28 February 2026 00:47:04 +0000 (0:00:00.845) 0:00:27.086 *****
2026-02-28 00:58:37.442256 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.442264 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.442272 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.442281 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.442290 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.442298 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.442306 | orchestrator |
2026-02-28 00:58:37.442315 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-28 00:58:37.442322 | orchestrator | Saturday 28 February 2026 00:47:06 +0000 (0:00:01.370) 0:00:28.456 *****
2026-02-28 00:58:37.442330 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.442338 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.442361 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.442368 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.442376 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.442391 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.442420 | orchestrator |
2026-02-28 00:58:37.442428 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-28 00:58:37.442435 | orchestrator | Saturday 28 February 2026 00:47:06 +0000 (0:00:00.876) 0:00:29.333 *****
2026-02-28 00:58:37.442444 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.442451 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.442483 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.442493 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.442502 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.442510 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.442518 | orchestrator |
2026-02-28 00:58:37.442546 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-28 00:58:37.442557 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:00.720) 0:00:30.053 *****
2026-02-28 00:58:37.442564 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.442572 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.442580 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.442587 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.442595 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.442603 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.442715 | orchestrator |
2026-02-28 00:58:37.442723 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-28 00:58:37.442731 | orchestrator | Saturday 28 February 2026 00:47:08 +0000 (0:00:00.859) 0:00:30.913 *****
2026-02-28 00:58:37.442741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741', 'dm-uuid-LVM-WrHd1WBJwiIQu3wRvwi3oxAdU1uiYw1ssr0IlesLmubdqf3kJezjrYiXv7hinTbv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222', 'dm-uuid-LVM-DRo8KROozWdchoWkEV0I4rKCTeGe3CFfLwy1dNIrGyGq95SlnpSl29pQ5dp0XaOO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.442889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WtB1d6-sWNv-YURM-qg2z-wil5-81PB-JSzAr1', 'scsi-0QEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81', 'scsi-SQEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.442898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HybjrJ-CMl1-aoR8-mAan-oGuh-aofN-X0035i', 'scsi-0QEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf', 'scsi-SQEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.442907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de', 'scsi-SQEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.442920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.442929 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.442942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426', 'dm-uuid-LVM-ChISMhrkERnHZXWTu7s4Cf5VESYs0hDb5tiIHlQZ9NK3ixFV4FP9QLT1mPFzXBoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9', 'dm-uuid-LVM-MfhbHtjX1HzbaRtp6rlyWUuLSmVUMDv8D7nAKzldfMH2GcPlpwjDnIA26Y3Y2LmK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.442993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.443003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.443014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.443020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-19GMjF-P3yp-G5GE-42b5-lyDa-MHK0-ctbrGm', 'scsi-0QEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723', 'scsi-SQEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.443028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FmiwOs-YAtO-YgEO-v5qO-7EK3-xq1V-dgAbZr', 'scsi-0QEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4', 'scsi-SQEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.443037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a', 'scsi-SQEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 00:58:37.443046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7', 'dm-uuid-LVM-kce2OSWfgnJq6VvT8pSnf5sYedgDQOSKm1UikoTeCnBPfXdH7wmGnVieltB6N3Ts'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 00:58:37.443051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0', 'dm-uuid-LVM-7x6LJedXGNfAgbfF9zeovIMmS7m8AIY1vwDV3zTQmeQ3rXVdMCyFMDJlZcKtQVfD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-28 00:58:37.443075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443082 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.443092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-28 00:58:37.443222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part1', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part14', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part15', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part16', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443297 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iM0Bp3-uSx6-9x09-KOmn-NAd7-OJqA-7Ip2Ie', 'scsi-0QEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0', 'scsi-SQEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HufsDV-CZn4-olxe-xxSc-cpo2-QLxi-4vdiWp', 'scsi-0QEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b', 'scsi-SQEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57', 'scsi-SQEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443399 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.443407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443474 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.443481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 
00:58:37.443489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part1', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part14', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part15', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part16', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443529 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443537 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.443546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-28 00:58:37.443579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:37.443646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part15', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:37.443677 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.443686 | orchestrator | 2026-02-28 00:58:37.443694 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-28 00:58:37.443703 | orchestrator | Saturday 28 February 2026 00:47:10 +0000 (0:00:01.709) 0:00:32.622 ***** 2026-02-28 00:58:37.443712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741', 'dm-uuid-LVM-WrHd1WBJwiIQu3wRvwi3oxAdU1uiYw1ssr0IlesLmubdqf3kJezjrYiXv7hinTbv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443721 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222', 'dm-uuid-LVM-DRo8KROozWdchoWkEV0I4rKCTeGe3CFfLwy1dNIrGyGq95SlnpSl29pQ5dp0XaOO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426', 'dm-uuid-LVM-ChISMhrkERnHZXWTu7s4Cf5VESYs0hDb5tiIHlQZ9NK3ixFV4FP9QLT1mPFzXBoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.443987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:58:37.443998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WtB1d6-sWNv-YURM-qg2z-wil5-81PB-JSzAr1', 'scsi-0QEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81', 'scsi-SQEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HybjrJ-CMl1-aoR8-mAan-oGuh-aofN-X0035i', 'scsi-0QEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf', 'scsi-SQEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444096 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9', 'dm-uuid-LVM-MfhbHtjX1HzbaRtp6rlyWUuLSmVUMDv8D7nAKzldfMH2GcPlpwjDnIA26Y3Y2LmK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de', 'scsi-SQEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444125 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444134 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:58:37.444139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444171 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7', 'dm-uuid-LVM-kce2OSWfgnJq6VvT8pSnf5sYedgDQOSKm1UikoTeCnBPfXdH7wmGnVieltB6N3Ts'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0', 'dm-uuid-LVM-7x6LJedXGNfAgbfF9zeovIMmS7m8AIY1vwDV3zTQmeQ3rXVdMCyFMDJlZcKtQVfD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444252 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444268 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
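The skip messages above all carry one of two `false_condition` values: hosts outside the OSD group fail `inventory_hostname in groups.get(osd_group_name, [])`, and OSD hosts with auto-discovery disabled fail `osd_auto_discovery | default(False) | bool`. A minimal Python sketch of that gating, plus a simplified version of the device filter auto-discovery would apply (function names, the `osd_group_name` default, and the exact selection rules are illustrative assumptions, not the ceph-ansible source):

```python
# Hypothetical sketch of the skip logic seen in the log, NOT ceph-ansible code.

def should_consider_devices(hostname, groups, hostvars, osd_group_name="osds"):
    """Mirror the two false_condition checks from the log output."""
    # false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
    if hostname not in groups.get(osd_group_name, []):
        return False
    # false_condition: 'osd_auto_discovery | default(False) | bool'
    return bool(hostvars.get("osd_auto_discovery", False))

def discover_devices(ansible_devices):
    """Keep whole, non-removable, non-empty disks with no partitions or
    holders -- roughly what auto-discovery would select (simplified)."""
    return [
        name for name, info in ansible_devices.items()
        if not info.get("partitions")       # no partition table in use
        and not info.get("holders")         # not already claimed (e.g. by LVM)
        and info.get("removable") == "0"    # skip CD-ROMs like sr0
        and int(info.get("sectors", 0)) > 0 # skip empty loop devices
    ]
```

With facts shaped like the dumps above, `sda` (partitioned root disk), `sdb`/`sdc` (held by ceph LVs), `loop0` (0 sectors), and `sr0` (removable) would all be filtered out, leaving only an unused blank disk such as `sdd`.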
2026-02-28 00:58:37.444314 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444328 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444338 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444352 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444369 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-19GMjF-P3yp-G5GE-42b5-lyDa-MHK0-ctbrGm', 'scsi-0QEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723', 'scsi-SQEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:37.444374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444387 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444397 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444413 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FmiwOs-YAtO-YgEO-v5qO-7EK3-xq1V-dgAbZr', 'scsi-0QEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4', 'scsi-SQEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444418 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444426 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444431 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444436 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444457 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a', 'scsi-SQEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444475 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444485 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444490 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444498 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part1', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part14', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part15', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part16', 'scsi-SQEMU_QEMU_HARDDISK_fad726d2-031e-4d2a-a9ae-f431162b566b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iM0Bp3-uSx6-9x09-KOmn-NAd7-OJqA-7Ip2Ie', 'scsi-0QEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0', 'scsi-SQEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444922 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444947 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.444955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.444963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HufsDV-CZn4-olxe-xxSc-cpo2-QLxi-4vdiWp', 'scsi-0QEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b', 'scsi-SQEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445197 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part1', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part14', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part15', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part16', 'scsi-SQEMU_QEMU_HARDDISK_fdf124ea-0529-4dc9-b27a-d5265c98bb36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445227 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445232 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.445237 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57', 'scsi-SQEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445250 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445255 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.445260 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445268 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.445273 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445278 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445283 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445287 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445292 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445303 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445307 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445316 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part15', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bcd01d8-dc60-46f2-8431-43e53714b811-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
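Editor's note: the per-device skip messages above come from loop tasks in the ceph-ansible `ceph-facts` role that iterate over the host's discovered block devices and are guarded by `when:` conditions; the `false_condition` strings in the log name whichever condition evaluated false first. A minimal sketch of the pattern, using both condition strings exactly as they appear in the log (the task name and the `set_fact` body are illustrative, not taken from the role):

```yaml
# Illustrative only: a loop over all discovered devices, skipped per item
# unless the host is in the OSD group and auto-discovery is enabled.
- name: Collect candidate OSD devices (illustrative)
  ansible.builtin.set_fact:
    _osd_devices: "{{ _osd_devices | default([]) + [item.key] }}"
  loop: "{{ ansible_facts.devices | dict2items }}"
  when:
    - inventory_hostname in groups.get(osd_group_name, [])
    - osd_auto_discovery | default(False) | bool
```

Because the conditions are evaluated per loop item, Ansible prints one `skipping:` line for every device on every host where a condition is false, which is why each loop device, disk, and CD-ROM appears individually in the output above.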
2026-02-28 00:58:37.445324 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:37.445329 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.445334 | orchestrator |
2026-02-28 00:58:37.445341 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-28 00:58:37.445347 | orchestrator | Saturday 28 February 2026 00:47:11 +0000 (0:00:01.721) 0:00:34.344 *****
2026-02-28 00:58:37.445351 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.445359 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.445363 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.445368 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.445372 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.445376 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.445381 | orchestrator |
2026-02-28 00:58:37.445385 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-28 00:58:37.445390 | orchestrator | Saturday 28 February 2026 00:47:13 +0000 (0:00:01.919) 0:00:36.263 *****
2026-02-28 00:58:37.445394 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.445398 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.445403 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.445407 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.445412 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.445416 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.445420 | orchestrator |
2026-02-28 00:58:37.445425 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-28 00:58:37.445429 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:00.832) 0:00:37.096 *****
2026-02-28 00:58:37.445433 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.445438 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.445442 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.445447 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.445451 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.445455 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.445460 | orchestrator |
2026-02-28 00:58:37.445464 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-28 00:58:37.445469 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:01.082) 0:00:38.179 *****
2026-02-28 00:58:37.445473 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.445477 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.445482 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.445487 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.445491 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.445496 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.445501 | orchestrator |
2026-02-28 00:58:37.445506 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-28 00:58:37.445511 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:01.045) 0:00:39.224 *****
2026-02-28 00:58:37.445516 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.445521 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.445526 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.445531 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.445536 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.445541 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.445546 | orchestrator |
2026-02-28 00:58:37.445552 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-28 00:58:37.445557 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:01.278) 0:00:40.503 *****
2026-02-28 00:58:37.445563 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.445571 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.445577 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.445583 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.445595 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.445648 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.445657 | orchestrator |
2026-02-28 00:58:37.445664 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-28 00:58:37.445671 | orchestrator | Saturday 28 February 2026 00:47:19 +0000 (0:00:01.489) 0:00:41.992 *****
2026-02-28 00:58:37.445678 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:58:37.445685 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:58:37.445692 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:58:37.445699 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:58:37.445712 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-28 00:58:37.445720 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-28 00:58:37.445727 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:58:37.445734 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-28 00:58:37.445741 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-28 00:58:37.445749 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-28 00:58:37.445757 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-28 00:58:37.445765 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-28 00:58:37.445773 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-28 00:58:37.445779 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-28 00:58:37.445784 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-28 00:58:37.445788 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-28 00:58:37.445793 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-28 00:58:37.445798 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-28 00:58:37.445803 | orchestrator | 2026-02-28 00:58:37.445809 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-28 00:58:37.445814 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:04.670) 0:00:46.663 ***** 2026-02-28 00:58:37.445819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 00:58:37.445824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 00:58:37.445829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 00:58:37.445834 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.445842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-28 00:58:37.445847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-28 00:58:37.445853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-28 00:58:37.445858 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 00:58:37.445863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-28 00:58:37.445872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 00:58:37.445877 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 00:58:37.445882 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.445887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:58:37.445891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:58:37.445895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:58:37.445900 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.445904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-28 00:58:37.445908 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-28 00:58:37.445912 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-28 00:58:37.445917 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.445921 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-28 00:58:37.445925 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-28 00:58:37.445930 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-28 00:58:37.445934 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.445938 | orchestrator | 2026-02-28 00:58:37.445943 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-28 00:58:37.445947 | orchestrator | Saturday 28 February 2026 00:47:25 +0000 (0:00:01.069) 0:00:47.733 ***** 2026-02-28 00:58:37.445951 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.445955 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.445960 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.445965 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:37.445973 | orchestrator | 2026-02-28 00:58:37.445978 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-28 00:58:37.445984 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:01.849) 0:00:49.582 ***** 2026-02-28 00:58:37.445988 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.445993 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.445997 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446001 | orchestrator | 2026-02-28 00:58:37.446006 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-28 00:58:37.446010 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:00.405) 0:00:49.987 ***** 2026-02-28 00:58:37.446050 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446056 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446060 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446065 | orchestrator | 2026-02-28 00:58:37.446069 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-28 00:58:37.446074 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:00.312) 0:00:50.300 ***** 2026-02-28 00:58:37.446078 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446082 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446087 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446091 | orchestrator | 2026-02-28 00:58:37.446096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-28 00:58:37.446102 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:00.844) 0:00:51.145 ***** 2026-02-28 00:58:37.446110 | orchestrator | 
ok: [testbed-node-3] 2026-02-28 00:58:37.446117 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446123 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446130 | orchestrator | 2026-02-28 00:58:37.446136 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-28 00:58:37.446143 | orchestrator | Saturday 28 February 2026 00:47:29 +0000 (0:00:00.892) 0:00:52.037 ***** 2026-02-28 00:58:37.446150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:37.446157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:37.446164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:58:37.446170 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446177 | orchestrator | 2026-02-28 00:58:37.446184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-28 00:58:37.446191 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:00.408) 0:00:52.446 ***** 2026-02-28 00:58:37.446198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:37.446205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:37.446213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:58:37.446220 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446227 | orchestrator | 2026-02-28 00:58:37.446233 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-28 00:58:37.446238 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:00.569) 0:00:53.016 ***** 2026-02-28 00:58:37.446242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:37.446246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:37.446251 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-28 00:58:37.446255 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446259 | orchestrator | 2026-02-28 00:58:37.446264 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-28 00:58:37.446268 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:00.567) 0:00:53.583 ***** 2026-02-28 00:58:37.446272 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.446277 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446285 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446294 | orchestrator | 2026-02-28 00:58:37.446299 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-28 00:58:37.446303 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:00.357) 0:00:53.941 ***** 2026-02-28 00:58:37.446308 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-28 00:58:37.446312 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 00:58:37.446321 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-28 00:58:37.446325 | orchestrator | 2026-02-28 00:58:37.446330 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-28 00:58:37.446334 | orchestrator | Saturday 28 February 2026 00:47:32 +0000 (0:00:01.435) 0:00:55.376 ***** 2026-02-28 00:58:37.446339 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:58:37.446344 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:58:37.446348 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:58:37.446352 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 00:58:37.446357 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 00:58:37.446362 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 00:58:37.446366 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 00:58:37.446371 | orchestrator | 2026-02-28 00:58:37.446375 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-28 00:58:37.446379 | orchestrator | Saturday 28 February 2026 00:47:34 +0000 (0:00:01.515) 0:00:56.892 ***** 2026-02-28 00:58:37.446384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:58:37.446388 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:58:37.446393 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:58:37.446397 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 00:58:37.446402 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 00:58:37.446406 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 00:58:37.446410 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 00:58:37.446415 | orchestrator | 2026-02-28 00:58:37.446419 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:58:37.446423 | orchestrator | Saturday 28 February 2026 00:47:37 +0000 (0:00:02.722) 0:00:59.614 ***** 2026-02-28 00:58:37.446428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.446434 | orchestrator | 2026-02-28 00:58:37.446439 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-02-28 00:58:37.446443 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:01.701) 0:01:01.316 ***** 2026-02-28 00:58:37.446447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.446452 | orchestrator | 2026-02-28 00:58:37.446456 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:37.446461 | orchestrator | Saturday 28 February 2026 00:47:40 +0000 (0:00:01.275) 0:01:02.591 ***** 2026-02-28 00:58:37.446465 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446469 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446474 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446478 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.446482 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.446490 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.446495 | orchestrator | 2026-02-28 00:58:37.446499 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:37.446503 | orchestrator | Saturday 28 February 2026 00:47:42 +0000 (0:00:01.831) 0:01:04.422 ***** 2026-02-28 00:58:37.446508 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.446512 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.446517 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.446521 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.446525 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446529 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446534 | orchestrator | 2026-02-28 00:58:37.446538 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:37.446542 | orchestrator | Saturday 28 February 2026 00:47:42 +0000 
(0:00:00.991) 0:01:05.414 ***** 2026-02-28 00:58:37.446547 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.446551 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446557 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.446564 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446570 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.446576 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.446583 | orchestrator | 2026-02-28 00:58:37.446590 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:37.446597 | orchestrator | Saturday 28 February 2026 00:47:44 +0000 (0:00:01.301) 0:01:06.715 ***** 2026-02-28 00:58:37.446603 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.446649 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.446656 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.446664 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446672 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.446680 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446685 | orchestrator | 2026-02-28 00:58:37.446695 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:37.446700 | orchestrator | Saturday 28 February 2026 00:47:45 +0000 (0:00:00.796) 0:01:07.511 ***** 2026-02-28 00:58:37.446704 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446709 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446713 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446717 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.446721 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.446735 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.446740 | orchestrator | 2026-02-28 00:58:37.446744 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-02-28 00:58:37.446749 | orchestrator | Saturday 28 February 2026 00:47:46 +0000 (0:00:01.507) 0:01:09.020 ***** 2026-02-28 00:58:37.446753 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446757 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446762 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446766 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.446770 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.446775 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.446779 | orchestrator | 2026-02-28 00:58:37.446783 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:37.446788 | orchestrator | Saturday 28 February 2026 00:47:48 +0000 (0:00:01.905) 0:01:10.925 ***** 2026-02-28 00:58:37.446792 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446796 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446801 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.446805 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.446809 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.446814 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.446818 | orchestrator | 2026-02-28 00:58:37.446822 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:37.446827 | orchestrator | Saturday 28 February 2026 00:47:51 +0000 (0:00:02.704) 0:01:13.630 ***** 2026-02-28 00:58:37.446836 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.446840 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446845 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446852 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.446858 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.446865 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.446872 | orchestrator | 2026-02-28 
00:58:37.446879 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:37.446886 | orchestrator | Saturday 28 February 2026 00:47:53 +0000 (0:00:02.268) 0:01:15.898 ***** 2026-02-28 00:58:37.446893 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.446900 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.446907 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.446913 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.446920 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.446927 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.446934 | orchestrator | 2026-02-28 00:58:37.446940 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:37.446947 | orchestrator | Saturday 28 February 2026 00:47:55 +0000 (0:00:01.605) 0:01:17.504 ***** 2026-02-28 00:58:37.446978 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.446986 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.446993 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447000 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447006 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447011 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447015 | orchestrator | 2026-02-28 00:58:37.447020 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:37.447024 | orchestrator | Saturday 28 February 2026 00:47:56 +0000 (0:00:01.090) 0:01:18.595 ***** 2026-02-28 00:58:37.447029 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447033 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447037 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447042 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.447046 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.447050 | 
orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.447055 | orchestrator | 2026-02-28 00:58:37.447059 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:37.447063 | orchestrator | Saturday 28 February 2026 00:47:57 +0000 (0:00:01.747) 0:01:20.342 ***** 2026-02-28 00:58:37.447068 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.447072 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.447076 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.447080 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447085 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447089 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447093 | orchestrator | 2026-02-28 00:58:37.447098 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:37.447102 | orchestrator | Saturday 28 February 2026 00:47:58 +0000 (0:00:00.936) 0:01:21.279 ***** 2026-02-28 00:58:37.447107 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.447111 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.447115 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.447120 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447124 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447128 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447133 | orchestrator | 2026-02-28 00:58:37.447137 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:37.447141 | orchestrator | Saturday 28 February 2026 00:48:00 +0000 (0:00:01.311) 0:01:22.591 ***** 2026-02-28 00:58:37.447146 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.447150 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.447154 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.447159 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
00:58:37.447168 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447172 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447177 | orchestrator | 2026-02-28 00:58:37.447181 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:37.447186 | orchestrator | Saturday 28 February 2026 00:48:00 +0000 (0:00:00.771) 0:01:23.362 ***** 2026-02-28 00:58:37.447190 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447194 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447199 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447203 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447207 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447212 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447216 | orchestrator | 2026-02-28 00:58:37.447223 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:37.447228 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:01.016) 0:01:24.378 ***** 2026-02-28 00:58:37.447232 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447237 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447241 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447245 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447254 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447259 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447263 | orchestrator | 2026-02-28 00:58:37.447268 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:37.447272 | orchestrator | Saturday 28 February 2026 00:48:02 +0000 (0:00:01.012) 0:01:25.391 ***** 2026-02-28 00:58:37.447276 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447281 | orchestrator | skipping: [testbed-node-4] 2026-02-28 
00:58:37.447285 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447289 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.447294 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.447298 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.447302 | orchestrator | 2026-02-28 00:58:37.447307 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:37.447311 | orchestrator | Saturday 28 February 2026 00:48:03 +0000 (0:00:01.020) 0:01:26.411 ***** 2026-02-28 00:58:37.447315 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.447320 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.447324 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.447328 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.447332 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.447337 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.447341 | orchestrator | 2026-02-28 00:58:37.447345 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:37.447350 | orchestrator | Saturday 28 February 2026 00:48:05 +0000 (0:00:01.061) 0:01:27.472 ***** 2026-02-28 00:58:37.447354 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.447358 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.447362 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.447367 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.447371 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.447375 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.447379 | orchestrator | 2026-02-28 00:58:37.447386 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-28 00:58:37.447393 | orchestrator | Saturday 28 February 2026 00:48:07 +0000 (0:00:02.524) 0:01:29.997 ***** 2026-02-28 00:58:37.447399 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.447407 | 
orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.447414 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.447421 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.447428 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.447449 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.447456 | orchestrator | 2026-02-28 00:58:37.447464 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-28 00:58:37.447469 | orchestrator | Saturday 28 February 2026 00:48:09 +0000 (0:00:02.100) 0:01:32.097 ***** 2026-02-28 00:58:37.447484 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.447489 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.447493 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.447498 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.447502 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.447507 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.447511 | orchestrator | 2026-02-28 00:58:37.447515 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-28 00:58:37.447520 | orchestrator | Saturday 28 February 2026 00:48:13 +0000 (0:00:03.991) 0:01:36.089 ***** 2026-02-28 00:58:37.447525 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.447530 | orchestrator | 2026-02-28 00:58:37.447534 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-28 00:58:37.447539 | orchestrator | Saturday 28 February 2026 00:48:15 +0000 (0:00:02.250) 0:01:38.339 ***** 2026-02-28 00:58:37.447543 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447547 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447552 | orchestrator | 
skipping: [testbed-node-5] 2026-02-28 00:58:37.447556 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447560 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447565 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447569 | orchestrator | 2026-02-28 00:58:37.447574 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-28 00:58:37.447578 | orchestrator | Saturday 28 February 2026 00:48:17 +0000 (0:00:01.151) 0:01:39.491 ***** 2026-02-28 00:58:37.447582 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447587 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447591 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447596 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447600 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447620 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447626 | orchestrator | 2026-02-28 00:58:37.447630 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-28 00:58:37.447635 | orchestrator | Saturday 28 February 2026 00:48:18 +0000 (0:00:01.278) 0:01:40.769 ***** 2026-02-28 00:58:37.447639 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:37.447643 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:37.447647 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:37.447652 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:37.447656 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:37.447660 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:37.447665 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:37.447673 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:37.447677 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:37.447682 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:37.447691 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:37.447696 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:37.447700 | orchestrator | 2026-02-28 00:58:37.447705 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-28 00:58:37.447709 | orchestrator | Saturday 28 February 2026 00:48:19 +0000 (0:00:01.613) 0:01:42.382 ***** 2026-02-28 00:58:37.447718 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.447722 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.447727 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.447731 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.447735 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.447740 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.447744 | orchestrator | 2026-02-28 00:58:37.447749 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-28 00:58:37.447753 | orchestrator | Saturday 28 February 2026 00:48:21 +0000 (0:00:01.162) 0:01:43.544 ***** 2026-02-28 00:58:37.447757 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447762 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447766 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447771 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447775 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447779 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447784 | orchestrator | 2026-02-28 00:58:37.447788 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-28 00:58:37.447792 | orchestrator | Saturday 28 February 2026 00:48:21 +0000 (0:00:00.674) 0:01:44.219 ***** 2026-02-28 00:58:37.447797 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447801 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447805 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447810 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447814 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447818 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447823 | orchestrator | 2026-02-28 00:58:37.447827 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-28 00:58:37.447832 | orchestrator | Saturday 28 February 2026 00:48:22 +0000 (0:00:01.017) 0:01:45.236 ***** 2026-02-28 00:58:37.447836 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447841 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447845 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.447849 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.447854 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.447858 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.447862 | orchestrator | 2026-02-28 00:58:37.447867 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-28 00:58:37.447871 | orchestrator | Saturday 28 February 2026 00:48:23 +0000 (0:00:00.731) 0:01:45.967 ***** 2026-02-28 00:58:37.447876 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.447880 | orchestrator | 2026-02-28 00:58:37.447885 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-28 00:58:37.447889 | orchestrator | Saturday 28 February 2026 00:48:25 +0000 (0:00:01.485) 0:01:47.453 ***** 2026-02-28 00:58:37.447894 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.447898 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.447902 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.447907 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.447911 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.447915 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.447919 | orchestrator | 2026-02-28 00:58:37.447924 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-28 00:58:37.447928 | orchestrator | Saturday 28 February 2026 00:49:10 +0000 (0:00:45.068) 0:02:32.522 ***** 2026-02-28 00:58:37.447933 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:37.447937 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:37.447942 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:37.447946 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.447954 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:37.447958 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:37.447963 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:37.447967 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.447971 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 
00:58:37.447976 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:37.447980 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:37.447985 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:37.447989 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:37.447993 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:37.447998 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448002 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:37.448010 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:37.448014 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:37.448018 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448023 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448030 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:37.448035 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:37.448039 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:37.448044 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448048 | orchestrator | 2026-02-28 00:58:37.448052 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-28 00:58:37.448057 | orchestrator | Saturday 28 February 2026 00:49:11 +0000 (0:00:00.924) 0:02:33.446 ***** 2026-02-28 00:58:37.448061 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448066 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448070 | 
orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448075 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448079 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448083 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448088 | orchestrator | 2026-02-28 00:58:37.448092 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-28 00:58:37.448097 | orchestrator | Saturday 28 February 2026 00:49:12 +0000 (0:00:00.977) 0:02:34.424 ***** 2026-02-28 00:58:37.448101 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448105 | orchestrator | 2026-02-28 00:58:37.448110 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-28 00:58:37.448114 | orchestrator | Saturday 28 February 2026 00:49:12 +0000 (0:00:00.168) 0:02:34.592 ***** 2026-02-28 00:58:37.448119 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448123 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448127 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448132 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448136 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448141 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448145 | orchestrator | 2026-02-28 00:58:37.448149 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-28 00:58:37.448154 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.840) 0:02:35.433 ***** 2026-02-28 00:58:37.448158 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448162 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448167 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448176 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448180 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448184 | 
orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448189 | orchestrator | 2026-02-28 00:58:37.448193 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-28 00:58:37.448197 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.913) 0:02:36.347 ***** 2026-02-28 00:58:37.448202 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448206 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448210 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448214 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448219 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448223 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448227 | orchestrator | 2026-02-28 00:58:37.448232 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-28 00:58:37.448236 | orchestrator | Saturday 28 February 2026 00:49:14 +0000 (0:00:00.856) 0:02:37.203 ***** 2026-02-28 00:58:37.448240 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.448245 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.448249 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.448253 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.448258 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.448262 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.448266 | orchestrator | 2026-02-28 00:58:37.448271 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-28 00:58:37.448275 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:04.042) 0:02:41.246 ***** 2026-02-28 00:58:37.448279 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.448284 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.448288 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.448292 | orchestrator | ok: [testbed-node-0] 2026-02-28 
00:58:37.448297 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.448301 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.448305 | orchestrator | 2026-02-28 00:58:37.448309 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-28 00:58:37.448314 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:01.033) 0:02:42.279 ***** 2026-02-28 00:58:37.448319 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.448325 | orchestrator | 2026-02-28 00:58:37.448329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-28 00:58:37.448333 | orchestrator | Saturday 28 February 2026 00:49:21 +0000 (0:00:01.837) 0:02:44.117 ***** 2026-02-28 00:58:37.448338 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448342 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448346 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448351 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448355 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448359 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448363 | orchestrator | 2026-02-28 00:58:37.448368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-28 00:58:37.448372 | orchestrator | Saturday 28 February 2026 00:49:23 +0000 (0:00:01.509) 0:02:45.627 ***** 2026-02-28 00:58:37.448377 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448381 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448385 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448389 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448394 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448401 | 
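The release-detection sequence above ("Get ceph version", "Set_fact ceph_version ceph_version.stdout.split", then release.yml probing each release name in turn) amounts to mapping the major version from the `ceph --version` output to a release codename. A minimal sketch of that mapping, assuming the usual `ceph --version` string layout; the table and parsing are illustrative, not ceph-ansible's exact code:

```python
# Hedged sketch: derive the Ceph release name from `ceph --version`
# output, mirroring how release.yml sets exactly one ceph_release fact
# (later in this log, "reef" matches and the older releases are skipped).
RELEASES = {12: "luminous", 13: "mimic", 14: "nautilus",
            15: "octopus", 16: "pacific", 17: "quincy", 18: "reef"}

def release_from_version(version_stdout: str) -> str:
    # Example input: "ceph version 18.2.1 (<hash>) reef (stable)"
    major = int(version_stdout.split()[2].split(".")[0])
    return RELEASES.get(major, "unknown")
```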
orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448406 | orchestrator | 2026-02-28 00:58:37.448410 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-28 00:58:37.448415 | orchestrator | Saturday 28 February 2026 00:49:24 +0000 (0:00:01.074) 0:02:46.702 ***** 2026-02-28 00:58:37.448422 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448426 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448433 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448438 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448442 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448447 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448451 | orchestrator | 2026-02-28 00:58:37.448455 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-28 00:58:37.448460 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:01.126) 0:02:47.829 ***** 2026-02-28 00:58:37.448464 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448469 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448473 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448477 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448482 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448486 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448490 | orchestrator | 2026-02-28 00:58:37.448495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-28 00:58:37.448499 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:01.126) 0:02:48.955 ***** 2026-02-28 00:58:37.448503 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448508 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448512 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448516 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448521 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448525 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448529 | orchestrator | 2026-02-28 00:58:37.448533 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-28 00:58:37.448538 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:01.125) 0:02:50.080 ***** 2026-02-28 00:58:37.448542 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448547 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448551 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448555 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448560 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448564 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448568 | orchestrator | 2026-02-28 00:58:37.448573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-28 00:58:37.448577 | orchestrator | Saturday 28 February 2026 00:49:28 +0000 (0:00:00.661) 0:02:50.741 ***** 2026-02-28 00:58:37.448581 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448586 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448590 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448594 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448598 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448603 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448622 | orchestrator | 2026-02-28 00:58:37.448627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-28 00:58:37.448631 | orchestrator | Saturday 28 February 2026 00:49:29 +0000 (0:00:01.039) 0:02:51.781 ***** 2026-02-28 00:58:37.448636 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.448640 | 
orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.448645 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.448649 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.448653 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.448657 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.448662 | orchestrator | 2026-02-28 00:58:37.448666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-28 00:58:37.448671 | orchestrator | Saturday 28 February 2026 00:49:30 +0000 (0:00:00.684) 0:02:52.466 ***** 2026-02-28 00:58:37.448675 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.448680 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.448684 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.448692 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.448696 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.448700 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.448705 | orchestrator | 2026-02-28 00:58:37.448709 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-28 00:58:37.448713 | orchestrator | Saturday 28 February 2026 00:49:31 +0000 (0:00:01.654) 0:02:54.121 ***** 2026-02-28 00:58:37.448718 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.448722 | orchestrator | 2026-02-28 00:58:37.448727 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-28 00:58:37.448731 | orchestrator | Saturday 28 February 2026 00:49:32 +0000 (0:00:01.283) 0:02:55.404 ***** 2026-02-28 00:58:37.448735 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-28 00:58:37.448740 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-28 00:58:37.448744 | 
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-28 00:58:37.448749 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-28 00:58:37.448753 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-28 00:58:37.448757 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-28 00:58:37.448762 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-28 00:58:37.448766 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-28 00:58:37.448770 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-28 00:58:37.448774 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-28 00:58:37.448779 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:37.448783 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-28 00:58:37.448787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:37.448792 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:37.448799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-28 00:58:37.448804 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:37.448808 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:37.448812 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:37.448820 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:37.448825 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:37.448829 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:37.448833 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:37.448838 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-28 
00:58:37.448842 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:37.448846 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:37.448851 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:37.448855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:37.448859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:37.448863 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:37.448868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:37.448872 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:37.448876 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:37.448881 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:37.448885 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:37.448889 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:37.448897 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:37.448902 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:37.448906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:37.448910 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:37.448915 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:37.448919 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:37.448924 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:37.448928 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:37.448932 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:37.448936 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:37.448941 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:37.448945 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:37.448949 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:37.448954 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:37.448958 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:37.448962 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:37.448967 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:37.448971 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:37.448975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:37.448980 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:37.448984 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:37.448988 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:37.448993 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:37.448997 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:37.449001 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:37.449005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:37.449010 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 
2026-02-28 00:58:37.449014 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:37.449028 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:37.449033 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:37.449037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:37.449042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:37.449046 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:37.449050 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:37.449055 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:37.449059 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:37.449063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:37.449068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:37.449078 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:37.449083 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:37.449090 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:37.449097 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:37.449102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:37.449106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:37.449111 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:37.449115 | orchestrator 
| changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-28 00:58:37.449120 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:37.449124 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:37.449128 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:37.449133 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:37.449137 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-28 00:58:37.449142 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-28 00:58:37.449146 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-28 00:58:37.449151 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-28 00:58:37.449155 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-28 00:58:37.449159 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-28 00:58:37.449164 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-28 00:58:37.449168 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-28 00:58:37.449172 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-28 00:58:37.449176 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-28 00:58:37.449181 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-28 00:58:37.449185 | orchestrator | 2026-02-28 00:58:37.449190 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-28 00:58:37.449194 | orchestrator | Saturday 28 February 2026 00:49:39 +0000 (0:00:06.960) 0:03:02.365 ***** 2026-02-28 00:58:37.449199 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.449203 | orchestrator | skipping: [testbed-node-1] 2026-02-28 
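The "Create ceph initial directories" task above loops the same fixed directory list on every node. The tree it lays down can be reproduced as a plain shell loop; ceph-ansible's file module also sets owner, group, and mode, which are omitted here, and `ROOT` is a hypothetical staging prefix added so the sketch does not touch the real filesystem root:

```shell
# Hedged illustration of the directory set from the task output above.
# ROOT is an assumption for safe illustration; ceph-ansible creates
# these directly under /.
ROOT="$(mktemp -d)"
for d in /etc/ceph /var/lib/ceph /var/lib/ceph/mon /var/lib/ceph/osd \
         /var/lib/ceph/mds /var/lib/ceph/tmp /var/lib/ceph/crash \
         /var/lib/ceph/radosgw /var/lib/ceph/bootstrap-rgw \
         /var/lib/ceph/bootstrap-mgr /var/lib/ceph/bootstrap-mds \
         /var/lib/ceph/bootstrap-osd /var/lib/ceph/bootstrap-rbd \
         /var/lib/ceph/bootstrap-rbd-mirror /var/run/ceph /var/log/ceph; do
  mkdir -p "${ROOT}${d}"
done
```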
00:58:37.449208 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.449213 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:37.449217 | orchestrator | 2026-02-28 00:58:37.449221 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-28 00:58:37.449226 | orchestrator | Saturday 28 February 2026 00:49:41 +0000 (0:00:01.215) 0:03:03.581 ***** 2026-02-28 00:58:37.449230 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:37.449235 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:37.449239 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:37.449244 | orchestrator | 2026-02-28 00:58:37.449248 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-28 00:58:37.449253 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:01.070) 0:03:04.651 ***** 2026-02-28 00:58:37.449257 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:37.449262 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:37.449266 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:37.449274 | orchestrator | 2026-02-28 00:58:37.449278 | orchestrator | TASK [ceph-config : Reset num_osds] 
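The "Generate environment file" task above renders one environment file per radosgw instance from the per-node item dicts shown in the log (`instance_name`, `radosgw_address`, `radosgw_frontend_port`), which the radosgw systemd unit then consumes. A minimal sketch of that rendering; the environment variable names are assumptions, not ceph-ansible's actual template:

```python
# Hedged sketch: turn an rgw instance dict (as shown in the log items)
# into EnvironmentFile-style lines. Variable names here are hypothetical.
instance = {"instance_name": "rgw0",
            "radosgw_address": "192.168.16.13",
            "radosgw_frontend_port": 8081}

def render_env(inst: dict) -> str:
    return (f"INST_NAME={inst['instance_name']}\n"
            f"INST_ADDR={inst['radosgw_address']}\n"
            f"INST_PORT={inst['radosgw_frontend_port']}\n")
```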
******************************************** 2026-02-28 00:58:37.449283 | orchestrator | Saturday 28 February 2026 00:49:44 +0000 (0:00:01.774) 0:03:06.426 *****
2026-02-28 00:58:37.449287 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.449291 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.449296 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.449300 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449305 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449309 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449313 | orchestrator |
2026-02-28 00:58:37.449318 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-28 00:58:37.449322 | orchestrator | Saturday 28 February 2026 00:49:44 +0000 (0:00:00.840) 0:03:07.266 *****
2026-02-28 00:58:37.449327 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.449331 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.449336 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.449340 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449344 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449349 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449353 | orchestrator |
2026-02-28 00:58:37.449358 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-28 00:58:37.449362 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:01.210) 0:03:08.477 *****
2026-02-28 00:58:37.449369 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449374 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449378 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449383 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449387 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449391 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449396 | orchestrator |
2026-02-28 00:58:37.449403 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-28 00:58:37.449408 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:00.789) 0:03:09.267 *****
2026-02-28 00:58:37.449412 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449417 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449421 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449425 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449430 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449434 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449438 | orchestrator |
2026-02-28 00:58:37.449443 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-28 00:58:37.449447 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:01.248) 0:03:10.515 *****
2026-02-28 00:58:37.449452 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449456 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449460 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449465 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449469 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449473 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449478 | orchestrator |
2026-02-28 00:58:37.449482 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-28 00:58:37.449487 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:00.859) 0:03:11.375 *****
2026-02-28 00:58:37.449491 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449495 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449500 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449504 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449508 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449513 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449517 | orchestrator |
2026-02-28 00:58:37.449522 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-28 00:58:37.449529 | orchestrator | Saturday 28 February 2026 00:49:50 +0000 (0:00:01.448) 0:03:12.824 *****
2026-02-28 00:58:37.449534 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449538 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449543 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449547 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449551 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449556 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449560 | orchestrator |
2026-02-28 00:58:37.449565 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-28 00:58:37.449569 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:00.726) 0:03:13.550 *****
2026-02-28 00:58:37.449574 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449578 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449582 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449587 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449591 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449595 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449600 | orchestrator |
2026-02-28 00:58:37.449640 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-28 00:58:37.449647 | orchestrator | Saturday 28 February 2026 00:49:52 +0000 (0:00:01.225) 0:03:14.775 *****
2026-02-28 00:58:37.449651 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449656 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449660 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449664 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.449669 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.449673 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.449678 | orchestrator |
2026-02-28 00:58:37.449682 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-28 00:58:37.449687 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:03.681) 0:03:18.457 *****
2026-02-28 00:58:37.449691 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.449695 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.449700 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.449704 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449708 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449713 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449717 | orchestrator |
2026-02-28 00:58:37.449721 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-28 00:58:37.449726 | orchestrator | Saturday 28 February 2026 00:49:57 +0000 (0:00:01.256) 0:03:19.713 *****
2026-02-28 00:58:37.449730 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.449734 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.449739 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449743 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.449747 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449752 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449757 | orchestrator |
2026-02-28 00:58:37.449761 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-28 00:58:37.449766 | orchestrator | Saturday 28 February 2026 00:49:58 +0000 (0:00:00.878) 0:03:20.592 *****
2026-02-28 00:58:37.449770 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449774 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449778 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449783 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449787 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449792 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449796 | orchestrator |
2026-02-28 00:58:37.449801 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-28 00:58:37.449805 | orchestrator | Saturday 28 February 2026 00:49:59 +0000 (0:00:01.283) 0:03:21.875 *****
2026-02-28 00:58:37.449813 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.449821 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.449826 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449831 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.449839 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449843 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449848 | orchestrator |
2026-02-28 00:58:37.449852 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-28 00:58:37.449856 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:01.023) 0:03:22.899 *****
2026-02-28 00:58:37.449862 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-28 00:58:37.449868 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-28 00:58:37.449874 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449879 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-28 00:58:37.449883 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-28 00:58:37.449888 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-28 00:58:37.449892 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-28 00:58:37.449897 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449901 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449906 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449910 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449914 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449919 | orchestrator |
2026-02-28 00:58:37.449923 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-28 00:58:37.449928 | orchestrator | Saturday 28 February 2026 00:50:01 +0000 (0:00:01.361) 0:03:24.260 *****
2026-02-28 00:58:37.449932 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449936 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449941 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449945 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449949 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.449954 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449958 | orchestrator |
2026-02-28 00:58:37.449966 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-28 00:58:37.449970 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:00.980) 0:03:25.241 *****
2026-02-28 00:58:37.449975 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.449979 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.449983 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.449988 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.449992 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.449997 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450001 | orchestrator |
2026-02-28 00:58:37.450005 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-28 00:58:37.450010 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:01.304) 0:03:26.546 *****
2026-02-28 00:58:37.450045 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450051 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450056 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450060 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450065 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450069 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450074 | orchestrator |
2026-02-28 00:58:37.450078 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-28 00:58:37.450083 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:00.678) 0:03:27.224 *****
2026-02-28 00:58:37.450087 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450095 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450099 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450104 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450108 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450112 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450117 | orchestrator |
2026-02-28 00:58:37.450121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-28 00:58:37.450129 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:00.775) 0:03:28.000 *****
2026-02-28 00:58:37.450134 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450138 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450143 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450147 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450151 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450157 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450164 | orchestrator |
2026-02-28 00:58:37.450173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-28 00:58:37.450179 | orchestrator | Saturday 28 February 2026 00:50:06 +0000 (0:00:00.832) 0:03:28.832 *****
2026-02-28 00:58:37.450186 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.450192 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.450199 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450206 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.450213 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450220 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450227 | orchestrator |
2026-02-28 00:58:37.450234 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-28 00:58:37.450241 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:01.087) 0:03:29.920 *****
2026-02-28 00:58:37.450249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.450254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.450258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.450263 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450267 | orchestrator |
2026-02-28 00:58:37.450271 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-28 00:58:37.450276 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:00.383) 0:03:30.303 *****
2026-02-28 00:58:37.450285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.450289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.450293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.450298 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450302 | orchestrator |
2026-02-28 00:58:37.450307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-28 00:58:37.450311 | orchestrator | Saturday 28 February 2026 00:50:08 +0000 (0:00:00.385) 0:03:30.689 *****
2026-02-28 00:58:37.450315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.450320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.450324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.450328 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450333 | orchestrator |
2026-02-28 00:58:37.450337 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-28 00:58:37.450342 | orchestrator | Saturday 28 February 2026 00:50:08 +0000 (0:00:00.547) 0:03:31.236 *****
2026-02-28 00:58:37.450346 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.450351 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.450355 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.450359 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450364 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450368 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450373 | orchestrator |
2026-02-28 00:58:37.450377 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-28 00:58:37.450381 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:00.708) 0:03:31.945 *****
2026-02-28 00:58:37.450386 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-28 00:58:37.450390 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-28 00:58:37.450395 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-28 00:58:37.450399 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-28 00:58:37.450404 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-28 00:58:37.450408 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450412 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450417 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-28 00:58:37.450421 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450426 | orchestrator |
2026-02-28 00:58:37.450430 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-28 00:58:37.450434 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:02.476) 0:03:34.422 *****
2026-02-28 00:58:37.450439 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.450443 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.450448 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.450452 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.450456 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.450461 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.450465 | orchestrator |
2026-02-28 00:58:37.450470 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:58:37.450474 | orchestrator | Saturday 28 February 2026 00:50:15 +0000 (0:00:03.513) 0:03:37.935 *****
2026-02-28 00:58:37.450479 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.450483 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.450487 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.450492 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.450496 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.450501 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.450505 | orchestrator |
2026-02-28 00:58:37.450510 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-28 00:58:37.450514 | orchestrator | Saturday 28 February 2026 00:50:16 +0000 (0:00:01.465) 0:03:39.401 *****
2026-02-28 00:58:37.450518 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450527 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450534 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450539 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.450544 | orchestrator |
2026-02-28 00:58:37.450548 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-28 00:58:37.450562 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:01.166) 0:03:40.568 *****
2026-02-28 00:58:37.450567 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.450571 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.450576 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.450580 | orchestrator |
2026-02-28 00:58:37.450585 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-28 00:58:37.450589 | orchestrator | Saturday 28 February 2026 00:50:18 +0000 (0:00:00.382) 0:03:40.950 *****
2026-02-28 00:58:37.450594 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.450598 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.450603 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.450622 | orchestrator |
2026-02-28 00:58:37.450627 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-28 00:58:37.450632 | orchestrator | Saturday 28 February 2026 00:50:19 +0000 (0:00:01.334) 0:03:42.285 *****
2026-02-28 00:58:37.450636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:58:37.450640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:58:37.450645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:58:37.450649 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450654 | orchestrator |
2026-02-28 00:58:37.450658 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-28 00:58:37.450662 | orchestrator | Saturday 28 February 2026 00:50:20 +0000 (0:00:00.963) 0:03:43.249 *****
2026-02-28 00:58:37.450667 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.450671 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.450675 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.450679 | orchestrator |
2026-02-28 00:58:37.450684 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-28 00:58:37.450688 | orchestrator | Saturday 28 February 2026 00:50:21 +0000 (0:00:00.316) 0:03:43.565 *****
2026-02-28 00:58:37.450692 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.450697 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.450701 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.450705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.450710 | orchestrator |
2026-02-28 00:58:37.450714 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-28 00:58:37.450719 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.924) 0:03:44.490 *****
2026-02-28 00:58:37.450723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.450727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.450732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.450736 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450740 | orchestrator |
2026-02-28 00:58:37.450745 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-28 00:58:37.450749 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.355) 0:03:44.845 *****
2026-02-28 00:58:37.450753 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450758 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450762 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450766 | orchestrator |
2026-02-28 00:58:37.450771 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-28 00:58:37.450776 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.347) 0:03:45.193 *****
2026-02-28 00:58:37.450784 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450789 | orchestrator |
2026-02-28 00:58:37.450794 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-28 00:58:37.450798 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.187) 0:03:45.380 *****
2026-02-28 00:58:37.450803 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450807 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450812 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450817 | orchestrator |
2026-02-28 00:58:37.450821 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-28 00:58:37.450826 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:00.314) 0:03:45.694 *****
2026-02-28 00:58:37.450830 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450835 | orchestrator |
2026-02-28 00:58:37.450840 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-28 00:58:37.450844 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:00.228) 0:03:45.922 *****
2026-02-28 00:58:37.450849 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450853 | orchestrator |
2026-02-28 00:58:37.450858 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-28 00:58:37.450863 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:00.270) 0:03:46.193 *****
2026-02-28 00:58:37.450868 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450872 | orchestrator |
2026-02-28 00:58:37.450877 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-28 00:58:37.450881 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:00.199) 0:03:46.392 *****
2026-02-28 00:58:37.450886 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450890 | orchestrator |
2026-02-28 00:58:37.450895 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-28 00:58:37.450899 | orchestrator | Saturday 28 February 2026 00:50:24 +0000 (0:00:00.890) 0:03:47.283 *****
2026-02-28 00:58:37.450904 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450909 | orchestrator |
2026-02-28 00:58:37.450913 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-28 00:58:37.450918 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:00.262) 0:03:47.545 *****
2026-02-28 00:58:37.450922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.450930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.450935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.450940 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450944 | orchestrator |
2026-02-28 00:58:37.450949 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-28 00:58:37.450958 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:00.489) 0:03:48.035 *****
2026-02-28 00:58:37.450963 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450967 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.450972 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.450977 | orchestrator |
2026-02-28 00:58:37.450981 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-28 00:58:37.450986 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:00.381) 0:03:48.416 *****
2026-02-28 00:58:37.450991 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.450995 | orchestrator |
2026-02-28 00:58:37.451000 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-28 00:58:37.451005 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:00.237) 0:03:48.653 *****
2026-02-28 00:58:37.451009 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.451014 | orchestrator |
2026-02-28 00:58:37.451018 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-28 00:58:37.451023 | orchestrator | Saturday 28 February 2026 00:50:26 +0000 (0:00:00.233) 0:03:48.887 *****
2026-02-28 00:58:37.451028 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451036 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451041 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.451050 | orchestrator |
2026-02-28 00:58:37.451055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-28 00:58:37.451060 | orchestrator | Saturday 28 February 2026 00:50:27 +0000 (0:00:01.237) 0:03:50.124 *****
2026-02-28 00:58:37.451065 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.451069 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.451074 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.451079 | orchestrator |
2026-02-28 00:58:37.451083 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-28 00:58:37.451088 | orchestrator | Saturday 28 February 2026 00:50:28 +0000 (0:00:00.335) 0:03:50.460 *****
2026-02-28 00:58:37.451093 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.451097 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.451102 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.451106 | orchestrator |
2026-02-28 00:58:37.451111 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-28 00:58:37.451116 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:01.218) 0:03:51.679 *****
2026-02-28 00:58:37.451121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.451125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.451130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.451135 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.451139 | orchestrator |
2026-02-28 00:58:37.451144 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-28 00:58:37.451149 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:00.731) 0:03:52.411 *****
2026-02-28 00:58:37.451153 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.451158 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.451163 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.451167 | orchestrator |
2026-02-28 00:58:37.451172 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-28 00:58:37.451177 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:00.468) 0:03:52.879 *****
2026-02-28 00:58:37.451182 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451186 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451191 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.451200 | orchestrator |
2026-02-28 00:58:37.451205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-28 00:58:37.451210 | orchestrator | Saturday 28 February 2026 00:50:31 +0000 (0:00:00.776) 0:03:53.656 *****
2026-02-28 00:58:37.451214 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.451219 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.451223 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.451228 | orchestrator |
2026-02-28 00:58:37.451233 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-28 00:58:37.451237 | orchestrator | Saturday 28 February 2026 00:50:31 +0000 (0:00:00.458) 0:03:54.115 *****
2026-02-28 00:58:37.451242 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.451247 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.451251 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.451256 | orchestrator |
2026-02-28 00:58:37.451261 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-28 00:58:37.451265 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:01.268) 0:03:55.383 *****
2026-02-28 00:58:37.451270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.451275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.451288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.451293 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.451297 | orchestrator |
2026-02-28 00:58:37.451302 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-28 00:58:37.451307 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.584) 0:03:55.968 *****
2026-02-28 00:58:37.451311 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.451316 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.451320 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.451325 | orchestrator |
2026-02-28 00:58:37.451330 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-28 00:58:37.451337 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.336) 0:03:56.305 *****
2026-02-28 00:58:37.451342 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.451346 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.451351 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.451356 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451360 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451368 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451373 | orchestrator |
2026-02-28 00:58:37.451378 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-28 00:58:37.451382 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:00.818) 0:03:57.124 *****
2026-02-28 00:58:37.451387 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.451391 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.451396 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.451401 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.451405 | orchestrator |
2026-02-28 00:58:37.451410 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-28 00:58:37.451415 | orchestrator | Saturday 28 February 2026 00:50:35 +0000 (0:00:00.735) 0:03:57.859 *****
2026-02-28 00:58:37.451419 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.451424 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.451429 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.451433 | orchestrator |
2026-02-28 00:58:37.451438 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-28 00:58:37.451442 | orchestrator | Saturday 28 February 2026 00:50:35 +0000 (0:00:00.486) 0:03:58.345 *****
2026-02-28 00:58:37.451447 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.451452 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.451457 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.451461 | orchestrator |
2026-02-28 00:58:37.451466 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-28 00:58:37.451471 | orchestrator | Saturday 28 February 2026 00:50:37 +0000 (0:00:01.338) 0:03:59.684 *****
2026-02-28 00:58:37.451475 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:58:37.451480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:58:37.451485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:58:37.451489 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451494 | orchestrator |
2026-02-28 00:58:37.451498 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-28 00:58:37.451503 | orchestrator | Saturday 28 February 2026 00:50:38 +0000 (0:00:00.856) 0:04:00.540 *****
2026-02-28 00:58:37.451507 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.451512 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.451517 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.451521 | orchestrator |
2026-02-28 00:58:37.451526 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-28 00:58:37.451531 | orchestrator |
2026-02-28
00:58:37.451536 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:58:37.451545 | orchestrator | Saturday 28 February 2026 00:50:38 +0000 (0:00:00.720) 0:04:01.261 ***** 2026-02-28 00:58:37.451549 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.451554 | orchestrator | 2026-02-28 00:58:37.451559 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:37.451564 | orchestrator | Saturday 28 February 2026 00:50:39 +0000 (0:00:00.913) 0:04:02.174 ***** 2026-02-28 00:58:37.451568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.451573 | orchestrator | 2026-02-28 00:58:37.451578 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:37.451582 | orchestrator | Saturday 28 February 2026 00:50:40 +0000 (0:00:00.554) 0:04:02.729 ***** 2026-02-28 00:58:37.451587 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.451592 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.451596 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.451601 | orchestrator | 2026-02-28 00:58:37.451637 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:37.451643 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:01.185) 0:04:03.914 ***** 2026-02-28 00:58:37.451649 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.451657 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.451663 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.451671 | orchestrator | 2026-02-28 00:58:37.451677 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-28 00:58:37.451683 | orchestrator | Saturday 28 February 2026 00:50:41 +0000 (0:00:00.362) 0:04:04.277 *****
2026-02-28 00:58:37.451690 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451697 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451704 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451711 | orchestrator |
2026-02-28 00:58:37.451718 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:58:37.451725 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:00.337) 0:04:04.615 *****
2026-02-28 00:58:37.451733 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451741 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451749 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451757 | orchestrator |
2026-02-28 00:58:37.451764 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:58:37.451771 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:00.332) 0:04:04.947 *****
2026-02-28 00:58:37.451780 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.451785 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.451789 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.451794 | orchestrator |
2026-02-28 00:58:37.451798 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:58:37.451803 | orchestrator | Saturday 28 February 2026 00:50:43 +0000 (0:00:01.395) 0:04:06.343 *****
2026-02-28 00:58:37.451812 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451817 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451821 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451826 | orchestrator |
2026-02-28 00:58:37.451831 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:58:37.451835 | orchestrator | Saturday 28 February 2026 00:50:44 +0000 (0:00:00.485) 0:04:06.828 *****
2026-02-28 00:58:37.451844 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451849 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451854 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451859 | orchestrator |
2026-02-28 00:58:37.451863 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:58:37.451868 | orchestrator | Saturday 28 February 2026 00:50:44 +0000 (0:00:00.440) 0:04:07.268 *****
2026-02-28 00:58:37.451873 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.451882 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.451887 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.451892 | orchestrator |
2026-02-28 00:58:37.451897 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:58:37.451901 | orchestrator | Saturday 28 February 2026 00:50:45 +0000 (0:00:00.999) 0:04:08.268 *****
2026-02-28 00:58:37.451906 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.451910 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.451915 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.451919 | orchestrator |
2026-02-28 00:58:37.451924 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:58:37.451929 | orchestrator | Saturday 28 February 2026 00:50:46 +0000 (0:00:01.141) 0:04:09.410 *****
2026-02-28 00:58:37.451934 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451938 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451943 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.451948 | orchestrator |
2026-02-28 00:58:37.451952 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:58:37.451957 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:00.372) 0:04:09.783 *****
2026-02-28 00:58:37.451961 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.451966 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.451971 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.451975 | orchestrator |
2026-02-28 00:58:37.451980 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:58:37.451984 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:00.414) 0:04:10.197 *****
2026-02-28 00:58:37.451989 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.451994 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.451999 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452003 | orchestrator |
2026-02-28 00:58:37.452008 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:58:37.452012 | orchestrator | Saturday 28 February 2026 00:50:48 +0000 (0:00:00.275) 0:04:10.473 *****
2026-02-28 00:58:37.452017 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452022 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452026 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452031 | orchestrator |
2026-02-28 00:58:37.452035 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:58:37.452040 | orchestrator | Saturday 28 February 2026 00:50:48 +0000 (0:00:00.292) 0:04:10.765 *****
2026-02-28 00:58:37.452045 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452049 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452054 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452058 | orchestrator |
2026-02-28 00:58:37.452063 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:58:37.452068 | orchestrator | Saturday 28 February 2026 00:50:48 +0000 (0:00:00.502) 0:04:11.268 *****
2026-02-28 00:58:37.452072 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452077 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452082 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452086 | orchestrator |
2026-02-28 00:58:37.452091 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:58:37.452095 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:00.350) 0:04:11.619 *****
2026-02-28 00:58:37.452100 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452105 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452109 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452114 | orchestrator |
2026-02-28 00:58:37.452119 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:58:37.452123 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:00.363) 0:04:11.982 *****
2026-02-28 00:58:37.452128 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452132 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452142 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452146 | orchestrator |
2026-02-28 00:58:37.452151 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:58:37.452156 | orchestrator | Saturday 28 February 2026 00:50:49 +0000 (0:00:00.379) 0:04:12.362 *****
2026-02-28 00:58:37.452160 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452165 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452170 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452174 | orchestrator |
2026-02-28 00:58:37.452179 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:58:37.452183 | orchestrator | Saturday 28 February 2026 00:50:50 +0000 (0:00:00.733) 0:04:13.096 *****
2026-02-28 00:58:37.452188 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452192 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452197 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452201 | orchestrator |
2026-02-28 00:58:37.452206 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-28 00:58:37.452211 | orchestrator | Saturday 28 February 2026 00:50:51 +0000 (0:00:00.565) 0:04:13.661 *****
2026-02-28 00:58:37.452215 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452220 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452225 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452229 | orchestrator |
2026-02-28 00:58:37.452234 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-28 00:58:37.452238 | orchestrator | Saturday 28 February 2026 00:50:51 +0000 (0:00:00.302) 0:04:13.964 *****
2026-02-28 00:58:37.452245 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.452250 | orchestrator |
2026-02-28 00:58:37.452255 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-28 00:58:37.452260 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:00.907) 0:04:14.871 *****
2026-02-28 00:58:37.452264 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452269 | orchestrator |
2026-02-28 00:58:37.452277 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-28 00:58:37.452282 | orchestrator | Saturday 28 February 2026 00:50:52 +0000 (0:00:00.170) 0:04:15.042 *****
2026-02-28 00:58:37.452287 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-28 00:58:37.452292 | orchestrator |
2026-02-28 00:58:37.452296 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-28 00:58:37.452301 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:01.254) 0:04:16.296 *****
2026-02-28 00:58:37.452306 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452310 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452315 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452319 | orchestrator |
2026-02-28 00:58:37.452324 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-28 00:58:37.452328 | orchestrator | Saturday 28 February 2026 00:50:54 +0000 (0:00:00.468) 0:04:16.765 *****
2026-02-28 00:58:37.452333 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452337 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452342 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452347 | orchestrator |
2026-02-28 00:58:37.452351 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-28 00:58:37.452356 | orchestrator | Saturday 28 February 2026 00:50:55 +0000 (0:00:00.688) 0:04:17.453 *****
2026-02-28 00:58:37.452360 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452365 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452370 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452374 | orchestrator |
2026-02-28 00:58:37.452379 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-28 00:58:37.452385 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:01.307) 0:04:18.761 *****
2026-02-28 00:58:37.452392 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452399 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452412 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452420 | orchestrator |
2026-02-28 00:58:37.452427 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-28 00:58:37.452434 | orchestrator | Saturday 28 February 2026 00:50:57 +0000 (0:00:00.981) 0:04:19.743 *****
2026-02-28 00:58:37.452441 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452448 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452454 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452462 | orchestrator |
2026-02-28 00:58:37.452470 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-28 00:58:37.452478 | orchestrator | Saturday 28 February 2026 00:50:58 +0000 (0:00:00.990) 0:04:20.733 *****
2026-02-28 00:58:37.452486 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452494 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452502 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452509 | orchestrator |
2026-02-28 00:58:37.452518 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-28 00:58:37.452523 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:00.903) 0:04:21.636 *****
2026-02-28 00:58:37.452527 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452532 | orchestrator |
2026-02-28 00:58:37.452536 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-28 00:58:37.452541 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:02.555) 0:04:24.192 *****
2026-02-28 00:58:37.452546 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452550 | orchestrator |
2026-02-28 00:58:37.452555 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-28 00:58:37.452559 | orchestrator | Saturday 28 February 2026 00:51:02 +0000 (0:00:01.042) 0:04:25.235 *****
2026-02-28 00:58:37.452564 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:58:37.452569 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.452573 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.452578 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-28 00:58:37.452583 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-28 00:58:37.452588 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-28 00:58:37.452592 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-28 00:58:37.452597 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-02-28 00:58:37.452602 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-28 00:58:37.452625 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-02-28 00:58:37.452630 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-28 00:58:37.452634 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-28 00:58:37.452639 | orchestrator |
2026-02-28 00:58:37.452644 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-28 00:58:37.452648 | orchestrator | Saturday 28 February 2026 00:51:07 +0000 (0:00:04.997) 0:04:30.232 *****
2026-02-28 00:58:37.452653 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452658 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452662 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452667 | orchestrator |
2026-02-28 00:58:37.452672 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-28 00:58:37.452677 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:02.758) 0:04:32.990 *****
2026-02-28 00:58:37.452682 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452687 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452692 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452697 | orchestrator |
2026-02-28 00:58:37.452703 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-28 00:58:37.452708 | orchestrator | Saturday 28 February 2026 00:51:11 +0000 (0:00:00.552) 0:04:33.543 *****
2026-02-28 00:58:37.452718 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.452728 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.452733 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.452738 | orchestrator |
2026-02-28 00:58:37.452744 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-28 00:58:37.452749 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:00.959) 0:04:34.502 *****
2026-02-28 00:58:37.452759 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452765 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452770 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452775 | orchestrator |
2026-02-28 00:58:37.452781 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-28 00:58:37.452786 | orchestrator | Saturday 28 February 2026 00:51:14 +0000 (0:00:02.811) 0:04:37.313 *****
2026-02-28 00:58:37.452791 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452796 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452801 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452806 | orchestrator |
2026-02-28 00:58:37.452812 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-28 00:58:37.452817 | orchestrator | Saturday 28 February 2026 00:51:17 +0000 (0:00:02.908) 0:04:40.222 *****
2026-02-28 00:58:37.452822 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452827 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452832 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452837 | orchestrator |
2026-02-28 00:58:37.452843 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-28 00:58:37.452848 | orchestrator | Saturday 28 February 2026 00:51:18 +0000 (0:00:00.547) 0:04:40.769 *****
2026-02-28 00:58:37.452853 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.452858 | orchestrator |
2026-02-28 00:58:37.452864 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-28 00:58:37.452869 | orchestrator | Saturday 28 February 2026 00:51:19 +0000 (0:00:01.449) 0:04:42.219 *****
2026-02-28 00:58:37.452874 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452879 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452885 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452890 | orchestrator |
2026-02-28 00:58:37.452895 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-28 00:58:37.452900 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:00.434) 0:04:42.653 *****
2026-02-28 00:58:37.452906 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.452911 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.452916 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.452921 | orchestrator |
2026-02-28 00:58:37.452926 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-28 00:58:37.452932 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:00.425) 0:04:43.079 *****
2026-02-28 00:58:37.452937 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.452942 | orchestrator |
2026-02-28 00:58:37.452948 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-28 00:58:37.452953 | orchestrator | Saturday 28 February 2026 00:51:21 +0000 (0:00:01.109) 0:04:44.188 *****
2026-02-28 00:58:37.452958 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.452963 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452968 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452974 | orchestrator |
2026-02-28 00:58:37.452979 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-28 00:58:37.452984 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:02.947) 0:04:47.136 *****
2026-02-28 00:58:37.452989 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.452994 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.452999 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.453008 | orchestrator |
2026-02-28 00:58:37.453013 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-28 00:58:37.453019 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:01.391) 0:04:48.528 *****
2026-02-28 00:58:37.453024 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.453029 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.453034 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.453039 | orchestrator |
2026-02-28 00:58:37.453044 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-28 00:58:37.453050 | orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:01.882) 0:04:50.410 *****
2026-02-28 00:58:37.453055 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:58:37.453060 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:58:37.453065 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:58:37.453070 | orchestrator |
2026-02-28 00:58:37.453076 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-28 00:58:37.453081 | orchestrator | Saturday 28 February 2026 00:51:30 +0000 (0:00:02.615) 0:04:53.026 *****
2026-02-28 00:58:37.453086 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.453091 | orchestrator |
2026-02-28 00:58:37.453097 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-28 00:58:37.453102 | orchestrator | Saturday 28 February 2026 00:51:31 +0000 (0:00:01.092) 0:04:54.119 *****
2026-02-28 00:58:37.453107 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-28 00:58:37.453113 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.453118 | orchestrator |
2026-02-28 00:58:37.453123 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-28 00:58:37.453128 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:22.008) 0:05:16.127 *****
2026-02-28 00:58:37.453134 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.453139 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.453144 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.453149 | orchestrator |
2026-02-28 00:58:37.453155 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-28 00:58:37.453163 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:09.422) 0:05:25.550 *****
2026-02-28 00:58:37.453169 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453174 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453179 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453184 | orchestrator |
2026-02-28 00:58:37.453190 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-28 00:58:37.453199 | orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:00.428) 0:05:25.978 *****
2026-02-28 00:58:37.453206 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-28 00:58:37.453213 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-28 00:58:37.453220 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-28 00:58:37.453232 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-28 00:58:37.453238 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-28 00:58:37.453245 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__60da267d7c4363389afcc13cb41232a82e2e585b'}])
2026-02-28 00:58:37.453252 | orchestrator |
2026-02-28 00:58:37.453257 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:58:37.453262 | orchestrator | Saturday 28 February 2026 00:52:18 +0000 (0:00:15.093) 0:05:41.072 *****
2026-02-28 00:58:37.453267 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453273 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453278 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453283 | orchestrator |
2026-02-28 00:58:37.453288 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-28 00:58:37.453294 | orchestrator | Saturday 28 February 2026 00:52:19 +0000 (0:00:00.450) 0:05:41.522 *****
2026-02-28 00:58:37.453299 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.453304 | orchestrator |
2026-02-28 00:58:37.453310 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-28 00:58:37.453315 | orchestrator | Saturday 28 February 2026 00:52:20 +0000 (0:00:01.067) 0:05:42.589 *****
2026-02-28 00:58:37.453320 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.453326 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.453331 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.453336 | orchestrator |
2026-02-28 00:58:37.453341 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-28 00:58:37.453347 | orchestrator | Saturday 28 February 2026 00:52:20 +0000 (0:00:00.482) 0:05:43.071 *****
2026-02-28 00:58:37.453352 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453357 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453362 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453368 | orchestrator |
2026-02-28 00:58:37.453373 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-28 00:58:37.453378 | orchestrator | Saturday 28 February 2026 00:52:21 +0000 (0:00:00.453) 0:05:43.525 *****
2026-02-28 00:58:37.453384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:58:37.453389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:58:37.453398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:58:37.453403 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453408 | orchestrator |
2026-02-28 00:58:37.453414 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-28 00:58:37.453419 | orchestrator | Saturday 28 February 2026 00:52:22 +0000 (0:00:01.131) 0:05:44.657 *****
2026-02-28 00:58:37.453424 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.453432 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.453438 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.453448 | orchestrator |
2026-02-28 00:58:37.453453 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-28 00:58:37.453458 | orchestrator |
2026-02-28 00:58:37.453464 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:58:37.453469 | orchestrator | Saturday 28 February 2026 00:52:23 +0000 (0:00:01.105) 0:05:45.763 *****
2026-02-28 00:58:37.453475 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.453480 | orchestrator |
2026-02-28 00:58:37.453486 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:58:37.453491 | orchestrator | Saturday 28 February 2026 00:52:24 +0000 (0:00:00.878) 0:05:46.642 *****
2026-02-28 00:58:37.453496 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.453501 | orchestrator |
2026-02-28 00:58:37.453507 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:58:37.453512 | orchestrator | Saturday 28 February 2026 00:52:25 +0000 (0:00:00.929) 0:05:47.571 *****
2026-02-28 00:58:37.453517 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.453522 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.453527 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.453533 | orchestrator |
2026-02-28 00:58:37.453538 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:58:37.453543 | orchestrator | Saturday 28 February 2026 00:52:25 +0000 (0:00:00.729) 0:05:48.300 *****
2026-02-28 00:58:37.453548 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453553 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453559 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453564 | orchestrator |
2026-02-28 00:58:37.453569 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:58:37.453574 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:00.342) 0:05:48.643 *****
2026-02-28 00:58:37.453580 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453585 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453590 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453595 | orchestrator |
2026-02-28 00:58:37.453600 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:58:37.453623 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:00.610) 0:05:49.254 *****
2026-02-28 00:58:37.453629 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453634 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453639 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453644 | orchestrator |
2026-02-28 00:58:37.453650 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:58:37.453655 | orchestrator | Saturday 28 February 2026 00:52:27 +0000 (0:00:00.337) 0:05:49.591 *****
2026-02-28 00:58:37.453661 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.453666 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.453671 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.453677 | orchestrator |
2026-02-28 00:58:37.453682 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:58:37.453687 | orchestrator | Saturday 28 February 2026 00:52:27 +0000 (0:00:00.742) 0:05:50.334 *****
2026-02-28 00:58:37.453692 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.453697 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.453703 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.453708 | orchestrator |
2026-02-28 00:58:37.453713 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:58:37.453718 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:00.387)
0:05:50.722 ***** 2026-02-28 00:58:37.453723 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.453729 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.453734 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.453739 | orchestrator | 2026-02-28 00:58:37.453751 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:37.453756 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:00.779) 0:05:51.502 ***** 2026-02-28 00:58:37.453761 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.453766 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.453771 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.453777 | orchestrator | 2026-02-28 00:58:37.453782 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:37.453787 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:00.841) 0:05:52.343 ***** 2026-02-28 00:58:37.453793 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.453798 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.453803 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.453808 | orchestrator | 2026-02-28 00:58:37.453813 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:37.453819 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:01.053) 0:05:53.396 ***** 2026-02-28 00:58:37.453824 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.453829 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.453834 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.453839 | orchestrator | 2026-02-28 00:58:37.453844 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:37.453849 | orchestrator | Saturday 28 February 2026 00:52:31 +0000 (0:00:00.470) 0:05:53.867 ***** 2026-02-28 
00:58:37.453855 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.453860 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.453865 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.453870 | orchestrator | 2026-02-28 00:58:37.453876 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:37.453884 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.736) 0:05:54.604 ***** 2026-02-28 00:58:37.453890 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.453895 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.453900 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.453905 | orchestrator | 2026-02-28 00:58:37.453911 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:37.453919 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.333) 0:05:54.937 ***** 2026-02-28 00:58:37.453925 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.453930 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.453935 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.453940 | orchestrator | 2026-02-28 00:58:37.453946 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:37.453951 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.346) 0:05:55.284 ***** 2026-02-28 00:58:37.453956 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.453962 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.453967 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.453972 | orchestrator | 2026-02-28 00:58:37.453977 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:37.453983 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.326) 0:05:55.610 ***** 2026-02-28 00:58:37.453988 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.453993 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.453998 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454003 | orchestrator | 2026-02-28 00:58:37.454009 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:37.454095 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.549) 0:05:56.159 ***** 2026-02-28 00:58:37.454103 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454108 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454113 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454118 | orchestrator | 2026-02-28 00:58:37.454123 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:37.454133 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.736) 0:05:56.896 ***** 2026-02-28 00:58:37.454138 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.454143 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.454149 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.454154 | orchestrator | 2026-02-28 00:58:37.454159 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:37.454164 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.439) 0:05:57.336 ***** 2026-02-28 00:58:37.454169 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.454174 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.454180 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.454185 | orchestrator | 2026-02-28 00:58:37.454190 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:37.454195 | orchestrator | Saturday 28 February 2026 00:52:35 +0000 (0:00:00.625) 0:05:57.961 ***** 2026-02-28 00:58:37.454200 | orchestrator | ok: [testbed-node-0] 
2026-02-28 00:58:37.454206 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.454211 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.454216 | orchestrator | 2026-02-28 00:58:37.454221 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-28 00:58:37.454227 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:00.880) 0:05:58.842 ***** 2026-02-28 00:58:37.454232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 00:58:37.454237 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:58:37.454243 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:58:37.454248 | orchestrator | 2026-02-28 00:58:37.454253 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-28 00:58:37.454258 | orchestrator | Saturday 28 February 2026 00:52:37 +0000 (0:00:00.770) 0:05:59.613 ***** 2026-02-28 00:58:37.454263 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.454269 | orchestrator | 2026-02-28 00:58:37.454274 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-28 00:58:37.454279 | orchestrator | Saturday 28 February 2026 00:52:37 +0000 (0:00:00.640) 0:06:00.253 ***** 2026-02-28 00:58:37.454284 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.454290 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.454295 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.454300 | orchestrator | 2026-02-28 00:58:37.454305 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-28 00:58:37.454310 | orchestrator | Saturday 28 February 2026 00:52:38 +0000 (0:00:01.049) 0:06:01.302 ***** 2026-02-28 00:58:37.454316 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454321 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454326 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454331 | orchestrator | 2026-02-28 00:58:37.454336 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-28 00:58:37.454342 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:00.695) 0:06:01.998 ***** 2026-02-28 00:58:37.454347 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:37.454352 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:37.454357 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:37.454362 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-28 00:58:37.454368 | orchestrator | 2026-02-28 00:58:37.454373 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-28 00:58:37.454378 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:10.631) 0:06:12.630 ***** 2026-02-28 00:58:37.454384 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.454389 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.454394 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.454399 | orchestrator | 2026-02-28 00:58:37.454409 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-28 00:58:37.454414 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:00.386) 0:06:13.017 ***** 2026-02-28 00:58:37.454423 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-28 00:58:37.454429 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-28 00:58:37.454434 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-28 00:58:37.454439 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-28 00:58:37.454444 | orchestrator | ok: 
[testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:37.454469 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:37.454475 | orchestrator | 2026-02-28 00:58:37.454480 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-28 00:58:37.454486 | orchestrator | Saturday 28 February 2026 00:52:52 +0000 (0:00:02.336) 0:06:15.354 ***** 2026-02-28 00:58:37.454493 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-28 00:58:37.454502 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-28 00:58:37.454511 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-28 00:58:37.454518 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:37.454526 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-28 00:58:37.454533 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-28 00:58:37.454541 | orchestrator | 2026-02-28 00:58:37.454549 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-28 00:58:37.454557 | orchestrator | Saturday 28 February 2026 00:52:54 +0000 (0:00:01.354) 0:06:16.709 ***** 2026-02-28 00:58:37.454565 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.454573 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.454581 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.454590 | orchestrator | 2026-02-28 00:58:37.454598 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-28 00:58:37.454626 | orchestrator | Saturday 28 February 2026 00:52:55 +0000 (0:00:01.242) 0:06:17.951 ***** 2026-02-28 00:58:37.454636 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454645 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454650 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454655 | 
orchestrator | 2026-02-28 00:58:37.454661 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-28 00:58:37.454666 | orchestrator | Saturday 28 February 2026 00:52:55 +0000 (0:00:00.316) 0:06:18.268 ***** 2026-02-28 00:58:37.454671 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454676 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454682 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454687 | orchestrator | 2026-02-28 00:58:37.454692 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-28 00:58:37.454697 | orchestrator | Saturday 28 February 2026 00:52:56 +0000 (0:00:00.325) 0:06:18.593 ***** 2026-02-28 00:58:37.454703 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-02-28 00:58:37.454708 | orchestrator | 2026-02-28 00:58:37.454713 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-28 00:58:37.454718 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:00.962) 0:06:19.556 ***** 2026-02-28 00:58:37.454724 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454729 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454734 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454739 | orchestrator | 2026-02-28 00:58:37.454745 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-28 00:58:37.454750 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:00.372) 0:06:19.928 ***** 2026-02-28 00:58:37.454755 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454760 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454765 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.454776 | orchestrator | 2026-02-28 00:58:37.454781 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-28 00:58:37.454786 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:00.374) 0:06:20.303 ***** 2026-02-28 00:58:37.454792 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.454797 | orchestrator | 2026-02-28 00:58:37.454802 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-28 00:58:37.454807 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:00.840) 0:06:21.143 ***** 2026-02-28 00:58:37.454812 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.454817 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.454822 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.454828 | orchestrator | 2026-02-28 00:58:37.454833 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-28 00:58:37.454838 | orchestrator | Saturday 28 February 2026 00:53:00 +0000 (0:00:01.439) 0:06:22.583 ***** 2026-02-28 00:58:37.454843 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.454848 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.454853 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.454858 | orchestrator | 2026-02-28 00:58:37.454864 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-28 00:58:37.454869 | orchestrator | Saturday 28 February 2026 00:53:01 +0000 (0:00:01.202) 0:06:23.786 ***** 2026-02-28 00:58:37.454874 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.454879 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.454884 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.454890 | orchestrator | 2026-02-28 00:58:37.454895 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-02-28 00:58:37.454900 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:01.903) 0:06:25.689 ***** 2026-02-28 00:58:37.454905 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.454910 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.454915 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.454921 | orchestrator | 2026-02-28 00:58:37.454926 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-28 00:58:37.454931 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:02.240) 0:06:27.930 ***** 2026-02-28 00:58:37.454937 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.454942 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.454953 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-28 00:58:37.454959 | orchestrator | 2026-02-28 00:58:37.454964 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-28 00:58:37.454970 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:00.594) 0:06:28.524 ***** 2026-02-28 00:58:37.454999 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-28 00:58:37.455006 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-28 00:58:37.455012 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-28 00:58:37.455017 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-28 00:58:37.455023 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-28 00:58:37.455029 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:37.455034 | orchestrator | 2026-02-28 00:58:37.455040 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-28 00:58:37.455045 | orchestrator | Saturday 28 February 2026 00:53:37 +0000 (0:00:31.192) 0:06:59.716 ***** 2026-02-28 00:58:37.455051 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:37.455061 | orchestrator | 2026-02-28 00:58:37.455066 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-28 00:58:37.455072 | orchestrator | Saturday 28 February 2026 00:53:38 +0000 (0:00:01.367) 0:07:01.084 ***** 2026-02-28 00:58:37.455078 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.455083 | orchestrator | 2026-02-28 00:58:37.455089 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-28 00:58:37.455094 | orchestrator | Saturday 28 February 2026 00:53:39 +0000 (0:00:00.412) 0:07:01.497 ***** 2026-02-28 00:58:37.455100 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.455105 | orchestrator | 2026-02-28 00:58:37.455111 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-28 00:58:37.455116 | orchestrator | Saturday 28 February 2026 00:53:39 +0000 (0:00:00.175) 0:07:01.673 ***** 2026-02-28 00:58:37.455122 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-28 00:58:37.455128 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-28 00:58:37.455133 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-28 00:58:37.455139 | orchestrator | 2026-02-28 00:58:37.455144 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-28 00:58:37.455150 | orchestrator | Saturday 28 February 2026 00:53:45 +0000 (0:00:06.563) 0:07:08.237 ***** 2026-02-28 00:58:37.455155 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-28 00:58:37.455161 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-28 00:58:37.455167 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-28 00:58:37.455172 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-28 00:58:37.455178 | orchestrator | 2026-02-28 00:58:37.455183 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:58:37.455189 | orchestrator | Saturday 28 February 2026 00:53:51 +0000 (0:00:05.636) 0:07:13.874 ***** 2026-02-28 00:58:37.455194 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.455200 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.455205 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.455211 | orchestrator | 2026-02-28 00:58:37.455217 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-28 00:58:37.455222 | orchestrator | Saturday 28 February 2026 00:53:52 +0000 (0:00:00.776) 0:07:14.650 ***** 2026-02-28 00:58:37.455228 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.455233 | orchestrator | 2026-02-28 00:58:37.455242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-28 00:58:37.455251 | orchestrator | Saturday 28 February 2026 00:53:52 +0000 (0:00:00.617) 0:07:15.267 ***** 2026-02-28 00:58:37.455260 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.455268 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.455277 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:58:37.455286 | orchestrator | 2026-02-28 00:58:37.455295 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-28 00:58:37.455303 | orchestrator | Saturday 28 February 2026 00:53:53 +0000 (0:00:00.401) 0:07:15.669 ***** 2026-02-28 00:58:37.455312 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.455321 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.455331 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.455340 | orchestrator | 2026-02-28 00:58:37.455349 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-28 00:58:37.455358 | orchestrator | Saturday 28 February 2026 00:53:54 +0000 (0:00:01.137) 0:07:16.806 ***** 2026-02-28 00:58:37.455368 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:58:37.455373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:58:37.455384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:58:37.455390 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.455395 | orchestrator | 2026-02-28 00:58:37.455401 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-28 00:58:37.455406 | orchestrator | Saturday 28 February 2026 00:53:54 +0000 (0:00:00.552) 0:07:17.359 ***** 2026-02-28 00:58:37.455412 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.455417 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.455426 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.455432 | orchestrator | 2026-02-28 00:58:37.455438 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-28 00:58:37.455443 | orchestrator | 2026-02-28 00:58:37.455449 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 
00:58:37.455454 | orchestrator | Saturday 28 February 2026 00:53:55 +0000 (0:00:00.704) 0:07:18.063 ***** 2026-02-28 00:58:37.455486 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:37.455493 | orchestrator | 2026-02-28 00:58:37.455498 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:37.455504 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:00.448) 0:07:18.512 ***** 2026-02-28 00:58:37.455509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:37.455515 | orchestrator | 2026-02-28 00:58:37.455520 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:37.455526 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:00.617) 0:07:19.129 ***** 2026-02-28 00:58:37.455531 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.455537 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.455542 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.455548 | orchestrator | 2026-02-28 00:58:37.455554 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:37.455559 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:00.288) 0:07:19.418 ***** 2026-02-28 00:58:37.455565 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.455570 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.455576 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.455581 | orchestrator | 2026-02-28 00:58:37.455587 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:37.455593 | orchestrator | Saturday 28 February 2026 00:53:57 +0000 (0:00:00.653) 0:07:20.071 ***** 
2026-02-28 00:58:37.455598 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.455603 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.455660 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.455667 | orchestrator |
2026-02-28 00:58:37.455672 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:58:37.455678 | orchestrator | Saturday 28 February 2026 00:53:58 +0000 (0:00:00.705) 0:07:20.777 *****
2026-02-28 00:58:37.455683 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.455689 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.455694 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.455700 | orchestrator |
2026-02-28 00:58:37.455705 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:58:37.455711 | orchestrator | Saturday 28 February 2026 00:53:59 +0000 (0:00:01.392) 0:07:22.170 *****
2026-02-28 00:58:37.455716 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.455723 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.455732 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.455740 | orchestrator |
2026-02-28 00:58:37.455748 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:58:37.455756 | orchestrator | Saturday 28 February 2026 00:54:00 +0000 (0:00:00.365) 0:07:22.535 *****
2026-02-28 00:58:37.455765 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.455773 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.455788 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.455797 | orchestrator |
2026-02-28 00:58:37.455806 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:58:37.455815 | orchestrator | Saturday 28 February 2026 00:54:00 +0000 (0:00:00.397) 0:07:22.932 *****
2026-02-28 00:58:37.455824 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.455833 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.455841 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.455846 | orchestrator |
2026-02-28 00:58:37.455852 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:58:37.455858 | orchestrator | Saturday 28 February 2026 00:54:00 +0000 (0:00:00.297) 0:07:23.230 *****
2026-02-28 00:58:37.455864 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.455869 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.455875 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.455880 | orchestrator |
2026-02-28 00:58:37.455886 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:58:37.455891 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:01.044) 0:07:24.275 *****
2026-02-28 00:58:37.455897 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.455902 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.455908 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.455913 | orchestrator |
2026-02-28 00:58:37.455918 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:58:37.455924 | orchestrator | Saturday 28 February 2026 00:54:02 +0000 (0:00:00.723) 0:07:24.998 *****
2026-02-28 00:58:37.455929 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.455935 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.455940 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.455945 | orchestrator |
2026-02-28 00:58:37.455951 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:58:37.455956 | orchestrator | Saturday 28 February 2026 00:54:02 +0000 (0:00:00.320) 0:07:25.319 *****
2026-02-28 00:58:37.455962 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.455968 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.455973 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.455979 | orchestrator |
2026-02-28 00:58:37.455984 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:58:37.455989 | orchestrator | Saturday 28 February 2026 00:54:03 +0000 (0:00:00.337) 0:07:25.656 *****
2026-02-28 00:58:37.455995 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456000 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456005 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456011 | orchestrator |
2026-02-28 00:58:37.456016 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:58:37.456022 | orchestrator | Saturday 28 February 2026 00:54:03 +0000 (0:00:00.660) 0:07:26.317 *****
2026-02-28 00:58:37.456027 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456037 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456043 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456048 | orchestrator |
2026-02-28 00:58:37.456054 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:58:37.456059 | orchestrator | Saturday 28 February 2026 00:54:04 +0000 (0:00:00.392) 0:07:26.709 *****
2026-02-28 00:58:37.456065 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456070 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456081 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456086 | orchestrator |
2026-02-28 00:58:37.456092 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:58:37.456097 | orchestrator | Saturday 28 February 2026 00:54:04 +0000 (0:00:00.337) 0:07:27.047 *****
2026-02-28 00:58:37.456103 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.456108 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.456114 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.456119 | orchestrator |
2026-02-28 00:58:37.456129 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:58:37.456135 | orchestrator | Saturday 28 February 2026 00:54:04 +0000 (0:00:00.308) 0:07:27.356 *****
2026-02-28 00:58:37.456140 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.456146 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.456151 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.456157 | orchestrator |
2026-02-28 00:58:37.456162 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:58:37.456168 | orchestrator | Saturday 28 February 2026 00:54:05 +0000 (0:00:00.886) 0:07:28.242 *****
2026-02-28 00:58:37.456173 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.456179 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.456184 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.456190 | orchestrator |
2026-02-28 00:58:37.456195 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:58:37.456201 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:00.484) 0:07:28.727 *****
2026-02-28 00:58:37.456206 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456211 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456217 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456222 | orchestrator |
2026-02-28 00:58:37.456228 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:58:37.456233 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:00.384) 0:07:29.112 *****
2026-02-28 00:58:37.456239 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456244 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456251 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456260 | orchestrator |
2026-02-28 00:58:37.456268 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-28 00:58:37.456313 | orchestrator | Saturday 28 February 2026 00:54:07 +0000 (0:00:01.033) 0:07:30.146 *****
2026-02-28 00:58:37.456319 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456325 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456330 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456335 | orchestrator |
2026-02-28 00:58:37.456341 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-28 00:58:37.456346 | orchestrator | Saturday 28 February 2026 00:54:08 +0000 (0:00:00.541) 0:07:30.687 *****
2026-02-28 00:58:37.456352 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:58:37.456357 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:58:37.456363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:58:37.456368 | orchestrator |
2026-02-28 00:58:37.456374 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-28 00:58:37.456379 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:00.874) 0:07:31.561 *****
2026-02-28 00:58:37.456385 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.456390 | orchestrator |
2026-02-28 00:58:37.456396 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-28 00:58:37.456401 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:00.568) 0:07:32.130 *****
2026-02-28 00:58:37.456407 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.456412 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.456418 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.456423 | orchestrator |
2026-02-28 00:58:37.456429 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-28 00:58:37.456434 | orchestrator | Saturday 28 February 2026 00:54:10 +0000 (0:00:00.705) 0:07:32.836 *****
2026-02-28 00:58:37.456440 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.456445 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.456451 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.456462 | orchestrator |
2026-02-28 00:58:37.456467 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-28 00:58:37.456473 | orchestrator | Saturday 28 February 2026 00:54:10 +0000 (0:00:00.315) 0:07:33.151 *****
2026-02-28 00:58:37.456478 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456484 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456489 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456495 | orchestrator |
2026-02-28 00:58:37.456500 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-28 00:58:37.456505 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:00.628) 0:07:33.779 *****
2026-02-28 00:58:37.456511 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.456516 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.456522 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.456527 | orchestrator |
2026-02-28 00:58:37.456533 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-28 00:58:37.456538 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:00.401) 0:07:34.181 *****
2026-02-28 00:58:37.456544 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-28 00:58:37.456549 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-28 00:58:37.456558 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-28 00:58:37.456564 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-28 00:58:37.456570 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-28 00:58:37.456581 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-28 00:58:37.456586 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-28 00:58:37.456592 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-28 00:58:37.456597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-28 00:58:37.456603 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-28 00:58:37.456629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-28 00:58:37.456635 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-28 00:58:37.456641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-28 00:58:37.456646 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-28 00:58:37.456652 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-28 00:58:37.456657 | orchestrator |
2026-02-28 00:58:37.456663 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-28 00:58:37.456668 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:02.471) 0:07:36.652 *****
2026-02-28 00:58:37.456674 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.456679 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.456685 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.456690 | orchestrator |
2026-02-28 00:58:37.456696 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-28 00:58:37.456702 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:00.361) 0:07:37.014 *****
2026-02-28 00:58:37.456707 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.456713 | orchestrator |
2026-02-28 00:58:37.456719 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-28 00:58:37.456724 | orchestrator | Saturday 28 February 2026 00:54:15 +0000 (0:00:00.570) 0:07:37.585 *****
2026-02-28 00:58:37.456730 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-28 00:58:37.456742 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-28 00:58:37.456748 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-28 00:58:37.456754 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-28 00:58:37.456759 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-28 00:58:37.456765 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-28 00:58:37.456770 | orchestrator |
2026-02-28 00:58:37.456776 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-28 00:58:37.456781 | orchestrator | Saturday 28 February 2026 00:54:16 +0000 (0:00:01.429) 0:07:39.015 *****
2026-02-28 00:58:37.456787 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.456792 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.456798 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:37.456804 | orchestrator |
2026-02-28 00:58:37.456809 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:58:37.456815 | orchestrator | Saturday 28 February 2026 00:54:18 +0000 (0:00:02.215) 0:07:41.230 *****
2026-02-28 00:58:37.456820 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.456826 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.456831 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.456837 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.456842 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.456848 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.456853 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.456859 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.456864 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.456870 | orchestrator |
2026-02-28 00:58:37.456875 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-28 00:58:37.456881 | orchestrator | Saturday 28 February 2026 00:54:20 +0000 (0:00:01.374) 0:07:42.604 *****
2026-02-28 00:58:37.456886 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:37.456892 | orchestrator |
2026-02-28 00:58:37.456897 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-28 00:58:37.456903 | orchestrator | Saturday 28 February 2026 00:54:22 +0000 (0:00:02.198) 0:07:44.803 *****
2026-02-28 00:58:37.456909 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.456914 | orchestrator |
2026-02-28 00:58:37.456920 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-28 00:58:37.456926 | orchestrator | Saturday 28 February 2026 00:54:23 +0000 (0:00:00.860) 0:07:45.664 *****
2026-02-28 00:58:37.456935 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b073c23-7edc-573a-a84d-7267a4d3e426', 'data_vg': 'ceph-7b073c23-7edc-573a-a84d-7267a4d3e426'})
2026-02-28 00:58:37.456942 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f012bc14-1358-5d7b-888e-596399f0a0b7', 'data_vg': 'ceph-f012bc14-1358-5d7b-888e-596399f0a0b7'})
2026-02-28 00:58:37.456951 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-867868d0-bc68-54b2-8c81-3bd5cfa2d741', 'data_vg': 'ceph-867868d0-bc68-54b2-8c81-3bd5cfa2d741'})
2026-02-28 00:58:37.456957 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-de70aebc-f344-5246-8655-326adc55aaa0', 'data_vg': 'ceph-de70aebc-f344-5246-8655-326adc55aaa0'})
2026-02-28 00:58:37.456963 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b30b5faa-3070-5965-91f3-7d8dbacf19e9', 'data_vg': 'ceph-b30b5faa-3070-5965-91f3-7d8dbacf19e9'})
2026-02-28 00:58:37.456968 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ee950762-4564-5222-9e83-52313bf46222', 'data_vg': 'ceph-ee950762-4564-5222-9e83-52313bf46222'})
2026-02-28 00:58:37.456978 | orchestrator |
2026-02-28 00:58:37.456983 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-28 00:58:37.456989 | orchestrator | Saturday 28 February 2026 00:55:04 +0000 (0:00:41.332) 0:08:26.997 *****
2026-02-28 00:58:37.456995 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457000 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457005 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457011 | orchestrator |
2026-02-28 00:58:37.457017 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-28 00:58:37.457022 | orchestrator | Saturday 28 February 2026 00:55:04 +0000 (0:00:00.297) 0:08:27.294 *****
2026-02-28 00:58:37.457028 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.457033 | orchestrator |
2026-02-28 00:58:37.457039 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-28 00:58:37.457045 | orchestrator | Saturday 28 February 2026 00:55:05 +0000 (0:00:00.705) 0:08:27.999 *****
2026-02-28 00:58:37.457050 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.457056 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.457061 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.457067 | orchestrator |
2026-02-28 00:58:37.457072 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-28 00:58:37.457078 | orchestrator | Saturday 28 February 2026 00:55:06 +0000 (0:00:00.662) 0:08:28.662 *****
2026-02-28 00:58:37.457084 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.457089 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.457095 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.457100 | orchestrator |
2026-02-28 00:58:37.457106 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-28 00:58:37.457111 | orchestrator | Saturday 28 February 2026 00:55:09 +0000 (0:00:02.911) 0:08:31.574 *****
2026-02-28 00:58:37.457117 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.457123 | orchestrator |
2026-02-28 00:58:37.457128 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-28 00:58:37.457134 | orchestrator | Saturday 28 February 2026 00:55:10 +0000 (0:00:00.949) 0:08:32.524 *****
2026-02-28 00:58:37.457139 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.457145 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.457150 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.457156 | orchestrator |
2026-02-28 00:58:37.457161 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-28 00:58:37.457167 | orchestrator | Saturday 28 February 2026 00:55:11 +0000 (0:00:01.325) 0:08:33.849 *****
2026-02-28 00:58:37.457173 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.457178 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.457183 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.457189 | orchestrator |
2026-02-28 00:58:37.457194 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-28 00:58:37.457200 | orchestrator | Saturday 28 February 2026 00:55:12 +0000 (0:00:01.244) 0:08:35.094 *****
2026-02-28 00:58:37.457205 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.457211 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.457216 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.457222 | orchestrator |
2026-02-28 00:58:37.457227 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-28 00:58:37.457233 | orchestrator | Saturday 28 February 2026 00:55:14 +0000 (0:00:01.883) 0:08:36.977 *****
2026-02-28 00:58:37.457239 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457244 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457250 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457255 | orchestrator |
2026-02-28 00:58:37.457261 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-28 00:58:37.457271 | orchestrator | Saturday 28 February 2026 00:55:15 +0000 (0:00:00.623) 0:08:37.601 *****
2026-02-28 00:58:37.457276 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457282 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457287 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457293 | orchestrator |
2026-02-28 00:58:37.457298 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-28 00:58:37.457304 | orchestrator | Saturday 28 February 2026 00:55:15 +0000 (0:00:00.371) 0:08:37.973 *****
2026-02-28 00:58:37.457309 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-02-28 00:58:37.457315 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-28 00:58:37.457320 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-28 00:58:37.457326 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-02-28 00:58:37.457331 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-02-28 00:58:37.457337 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-28 00:58:37.457343 | orchestrator |
2026-02-28 00:58:37.457348 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-28 00:58:37.457357 | orchestrator | Saturday 28 February 2026 00:55:16 +0000 (0:00:01.145) 0:08:39.118 *****
2026-02-28 00:58:37.457363 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-02-28 00:58:37.457369 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-28 00:58:37.457374 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-28 00:58:37.457380 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-28 00:58:37.457389 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-28 00:58:37.457395 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-28 00:58:37.457401 | orchestrator |
2026-02-28 00:58:37.457407 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-28 00:58:37.457413 | orchestrator | Saturday 28 February 2026 00:55:19 +0000 (0:00:02.404) 0:08:41.523 *****
2026-02-28 00:58:37.457418 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-28 00:58:37.457424 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-02-28 00:58:37.457429 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-28 00:58:37.457435 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-28 00:58:37.457440 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-28 00:58:37.457446 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-28 00:58:37.457451 | orchestrator |
2026-02-28 00:58:37.457457 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-28 00:58:37.457462 | orchestrator | Saturday 28 February 2026 00:55:23 +0000 (0:00:04.068) 0:08:45.592 *****
2026-02-28 00:58:37.457468 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457473 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457479 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:37.457484 | orchestrator |
2026-02-28 00:58:37.457490 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-28 00:58:37.457495 | orchestrator | Saturday 28 February 2026 00:55:25 +0000 (0:00:02.781) 0:08:48.373 *****
2026-02-28 00:58:37.457501 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457506 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457512 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-28 00:58:37.457518 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:37.457523 | orchestrator |
2026-02-28 00:58:37.457529 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-28 00:58:37.457535 | orchestrator | Saturday 28 February 2026 00:55:38 +0000 (0:00:12.851) 0:09:01.225 *****
2026-02-28 00:58:37.457540 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457546 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457551 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457557 | orchestrator |
2026-02-28 00:58:37.457562 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:58:37.457572 | orchestrator | Saturday 28 February 2026 00:55:39 +0000 (0:00:01.144) 0:09:02.370 *****
2026-02-28 00:58:37.457578 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457583 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457589 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457594 | orchestrator |
2026-02-28 00:58:37.457600 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-28 00:58:37.457623 | orchestrator | Saturday 28 February 2026 00:55:40 +0000 (0:00:00.406) 0:09:02.777 *****
2026-02-28 00:58:37.457629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.457635 | orchestrator |
2026-02-28 00:58:37.457640 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-28 00:58:37.457646 | orchestrator | Saturday 28 February 2026 00:55:40 +0000 (0:00:00.525) 0:09:03.302 *****
2026-02-28 00:58:37.457652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.457657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.457663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.457668 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457673 | orchestrator |
2026-02-28 00:58:37.457679 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-28 00:58:37.457684 | orchestrator | Saturday 28 February 2026 00:55:41 +0000 (0:00:01.016) 0:09:04.318 *****
2026-02-28 00:58:37.457690 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457695 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457701 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457706 | orchestrator |
2026-02-28 00:58:37.457712 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-28 00:58:37.457717 | orchestrator | Saturday 28 February 2026 00:55:42 +0000 (0:00:00.338) 0:09:04.657 *****
2026-02-28 00:58:37.457723 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457728 | orchestrator |
2026-02-28 00:58:37.457734 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-28 00:58:37.457739 | orchestrator | Saturday 28 February 2026 00:55:42 +0000 (0:00:00.273) 0:09:04.930 *****
2026-02-28 00:58:37.457745 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457751 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457756 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457761 | orchestrator |
2026-02-28 00:58:37.457767 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-28 00:58:37.457773 | orchestrator | Saturday 28 February 2026 00:55:42 +0000 (0:00:00.327) 0:09:05.259 *****
2026-02-28 00:58:37.457778 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457784 | orchestrator |
2026-02-28 00:58:37.457789 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-28 00:58:37.457795 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:00.252) 0:09:05.512 *****
2026-02-28 00:58:37.457801 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457806 | orchestrator |
2026-02-28 00:58:37.457812 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-28 00:58:37.457817 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:00.271) 0:09:05.783 *****
2026-02-28 00:58:37.457826 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457832 | orchestrator |
2026-02-28 00:58:37.457838 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-28 00:58:37.457844 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:00.181) 0:09:05.965 *****
2026-02-28 00:58:37.457849 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457854 | orchestrator |
2026-02-28 00:58:37.457864 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-28 00:58:37.457870 | orchestrator | Saturday 28 February 2026 00:55:43 +0000 (0:00:00.265) 0:09:06.230 *****
2026-02-28 00:58:37.457876 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457886 | orchestrator |
2026-02-28 00:58:37.457892 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-28 00:58:37.457898 | orchestrator | Saturday 28 February 2026 00:55:44 +0000 (0:00:00.902) 0:09:07.133 *****
2026-02-28 00:58:37.457903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.457909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.457914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.457920 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457925 | orchestrator |
2026-02-28 00:58:37.457931 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-28 00:58:37.457936 | orchestrator | Saturday 28 February 2026 00:55:45 +0000 (0:00:00.409) 0:09:07.542 *****
2026-02-28 00:58:37.457942 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457947 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.457953 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.457958 | orchestrator |
2026-02-28 00:58:37.457964 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-28 00:58:37.457969 | orchestrator | Saturday 28 February 2026 00:55:45 +0000 (0:00:00.382) 0:09:07.925 *****
2026-02-28 00:58:37.457975 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.457980 | orchestrator |
2026-02-28 00:58:37.457986 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-28 00:58:37.457991 | orchestrator | Saturday 28 February 2026 00:55:45 +0000 (0:00:00.240) 0:09:08.165 *****
2026-02-28 00:58:37.457997 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.458002 | orchestrator |
2026-02-28 00:58:37.458008 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-28 00:58:37.458039 | orchestrator |
2026-02-28 00:58:37.458047 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:58:37.458052 | orchestrator | Saturday 28 February 2026 00:55:46 +0000 (0:00:00.959) 0:09:09.125 *****
2026-02-28 00:58:37.458058 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.458064 | orchestrator |
2026-02-28 00:58:37.458070 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:58:37.458076 | orchestrator | Saturday 28 February 2026 00:55:47 +0000 (0:00:01.220) 0:09:10.346 *****
2026-02-28 00:58:37.458081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:37.458087 | orchestrator |
2026-02-28 00:58:37.458092 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:58:37.458098 | orchestrator | Saturday 28 February 2026 00:55:49 +0000 (0:00:01.117) 0:09:11.464 *****
2026-02-28 00:58:37.458103 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.458109 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.458114 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.458120 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.458125 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.458131 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.458136 | orchestrator |
2026-02-28 00:58:37.458141 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:58:37.458147 | orchestrator | Saturday 28 February 2026 00:55:50 +0000 (0:00:01.347) 0:09:12.812 *****
2026-02-28 00:58:37.458153 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.458158 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.458164 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.458169 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.458175 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.458180 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.458186 | orchestrator |
2026-02-28 00:58:37.458196 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:58:37.458202 | orchestrator | Saturday 28 February 2026 00:55:51 +0000 (0:00:00.728) 0:09:13.540 *****
2026-02-28 00:58:37.458207 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.458213 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.458218 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.458224 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.458229 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.458235 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.458240 | orchestrator |
2026-02-28 00:58:37.458246 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:58:37.458252 | orchestrator | Saturday 28 February 2026 00:55:52 +0000 (0:00:01.076) 0:09:14.616 *****
2026-02-28 00:58:37.458257 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:37.458263 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:37.458268 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.458274 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.458279 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:37.458285 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.458290 | orchestrator |
2026-02-28 00:58:37.458296 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:58:37.458301 | orchestrator | Saturday 28 February 2026 00:55:52 +0000 (0:00:00.731) 0:09:15.347 *****
2026-02-28 00:58:37.458306 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.458312 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.458317 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.458323 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:37.458328 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:37.458337 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:37.458343 | orchestrator |
2026-02-28 00:58:37.458349 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-02-28 00:58:37.458354 | orchestrator | Saturday 28 February 2026 00:55:54 +0000 (0:00:01.360) 0:09:16.708 ***** 2026-02-28 00:58:37.458360 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458365 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.458375 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458381 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458386 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458392 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458397 | orchestrator | 2026-02-28 00:58:37.458403 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:37.458408 | orchestrator | Saturday 28 February 2026 00:55:54 +0000 (0:00:00.653) 0:09:17.362 ***** 2026-02-28 00:58:37.458414 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458419 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.458425 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458430 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458436 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458441 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458446 | orchestrator | 2026-02-28 00:58:37.458452 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:37.458457 | orchestrator | Saturday 28 February 2026 00:55:56 +0000 (0:00:01.115) 0:09:18.478 ***** 2026-02-28 00:58:37.458463 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.458468 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.458474 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.458479 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.458485 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.458490 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.458496 | orchestrator 
| 2026-02-28 00:58:37.458501 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:37.458507 | orchestrator | Saturday 28 February 2026 00:55:57 +0000 (0:00:01.215) 0:09:19.694 ***** 2026-02-28 00:58:37.458512 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.458522 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.458528 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.458533 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.458538 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.458544 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.458549 | orchestrator | 2026-02-28 00:58:37.458555 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:37.458560 | orchestrator | Saturday 28 February 2026 00:55:58 +0000 (0:00:01.653) 0:09:21.347 ***** 2026-02-28 00:58:37.458566 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458571 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.458577 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458582 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458587 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458593 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458598 | orchestrator | 2026-02-28 00:58:37.458604 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:37.458625 | orchestrator | Saturday 28 February 2026 00:55:59 +0000 (0:00:00.680) 0:09:22.028 ***** 2026-02-28 00:58:37.458634 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458643 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.458653 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458661 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.458670 | orchestrator | ok: [testbed-node-1] 2026-02-28 
00:58:37.458679 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.458685 | orchestrator | 2026-02-28 00:58:37.458691 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:37.458696 | orchestrator | Saturday 28 February 2026 00:56:00 +0000 (0:00:00.925) 0:09:22.954 ***** 2026-02-28 00:58:37.458702 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.458708 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.458713 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.458718 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458724 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458729 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458734 | orchestrator | 2026-02-28 00:58:37.458740 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:37.458746 | orchestrator | Saturday 28 February 2026 00:56:01 +0000 (0:00:00.655) 0:09:23.609 ***** 2026-02-28 00:58:37.458751 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.458757 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.458762 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.458767 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458773 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458778 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458784 | orchestrator | 2026-02-28 00:58:37.458789 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:37.458794 | orchestrator | Saturday 28 February 2026 00:56:02 +0000 (0:00:00.922) 0:09:24.531 ***** 2026-02-28 00:58:37.458800 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.458805 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.458811 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.458816 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:58:37.458822 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458828 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458833 | orchestrator | 2026-02-28 00:58:37.458839 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:37.458844 | orchestrator | Saturday 28 February 2026 00:56:02 +0000 (0:00:00.699) 0:09:25.231 ***** 2026-02-28 00:58:37.458850 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458855 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.458860 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458866 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458871 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458877 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458887 | orchestrator | 2026-02-28 00:58:37.458893 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:37.458898 | orchestrator | Saturday 28 February 2026 00:56:03 +0000 (0:00:00.961) 0:09:26.192 ***** 2026-02-28 00:58:37.458904 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458909 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.458915 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458924 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:37.458929 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:37.458935 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:37.458940 | orchestrator | 2026-02-28 00:58:37.458946 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:37.458951 | orchestrator | Saturday 28 February 2026 00:56:04 +0000 (0:00:00.731) 0:09:26.924 ***** 2026-02-28 00:58:37.458961 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.458967 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 00:58:37.458972 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.458978 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.458983 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.458989 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.458994 | orchestrator | 2026-02-28 00:58:37.459000 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:37.459005 | orchestrator | Saturday 28 February 2026 00:56:05 +0000 (0:00:00.971) 0:09:27.895 ***** 2026-02-28 00:58:37.459011 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459016 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459022 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459027 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.459032 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.459038 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.459043 | orchestrator | 2026-02-28 00:58:37.459048 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:37.459054 | orchestrator | Saturday 28 February 2026 00:56:06 +0000 (0:00:00.746) 0:09:28.641 ***** 2026-02-28 00:58:37.459059 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459065 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459070 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459076 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.459081 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.459087 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.459092 | orchestrator | 2026-02-28 00:58:37.459097 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-28 00:58:37.459103 | orchestrator | Saturday 28 February 2026 00:56:07 +0000 (0:00:01.381) 0:09:30.023 ***** 2026-02-28 00:58:37.459108 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-28 00:58:37.459114 | orchestrator | 2026-02-28 00:58:37.459119 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-28 00:58:37.459125 | orchestrator | Saturday 28 February 2026 00:56:11 +0000 (0:00:04.153) 0:09:34.177 ***** 2026-02-28 00:58:37.459130 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:37.459136 | orchestrator | 2026-02-28 00:58:37.459151 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-28 00:58:37.459157 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:02.464) 0:09:36.641 ***** 2026-02-28 00:58:37.459162 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.459168 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.459173 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.459186 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.459191 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.459197 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.459203 | orchestrator | 2026-02-28 00:58:37.459208 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-28 00:58:37.459214 | orchestrator | Saturday 28 February 2026 00:56:16 +0000 (0:00:01.880) 0:09:38.522 ***** 2026-02-28 00:58:37.459225 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.459231 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.459236 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.459242 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.459247 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.459253 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.459258 | orchestrator | 2026-02-28 00:58:37.459264 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-28 00:58:37.459269 | orchestrator | Saturday 28 February 2026 00:56:17 +0000 (0:00:01.033) 0:09:39.555 ***** 2026-02-28 00:58:37.459275 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.459281 | orchestrator | 2026-02-28 00:58:37.459287 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-28 00:58:37.459292 | orchestrator | Saturday 28 February 2026 00:56:18 +0000 (0:00:01.367) 0:09:40.923 ***** 2026-02-28 00:58:37.459298 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.459303 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.459308 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.459314 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.459319 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.459325 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.459330 | orchestrator | 2026-02-28 00:58:37.459335 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-28 00:58:37.459341 | orchestrator | Saturday 28 February 2026 00:56:20 +0000 (0:00:01.922) 0:09:42.846 ***** 2026-02-28 00:58:37.459346 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.459352 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.459357 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.459363 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.459368 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.459374 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.459379 | orchestrator | 2026-02-28 00:58:37.459384 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-28 00:58:37.459390 | orchestrator | Saturday 28 February 2026 00:56:23 +0000 (0:00:03.390) 
0:09:46.237 ***** 2026-02-28 00:58:37.459396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:37.459401 | orchestrator | 2026-02-28 00:58:37.459407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-28 00:58:37.459412 | orchestrator | Saturday 28 February 2026 00:56:25 +0000 (0:00:01.527) 0:09:47.764 ***** 2026-02-28 00:58:37.459418 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459424 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459429 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459438 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.459444 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.459450 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.459455 | orchestrator | 2026-02-28 00:58:37.459461 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-28 00:58:37.459466 | orchestrator | Saturday 28 February 2026 00:56:26 +0000 (0:00:00.955) 0:09:48.720 ***** 2026-02-28 00:58:37.459475 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:37.459480 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:37.459486 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:37.459491 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:37.459497 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:37.459502 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:37.459507 | orchestrator | 2026-02-28 00:58:37.459513 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-28 00:58:37.459518 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:02.307) 0:09:51.028 ***** 2026-02-28 00:58:37.459531 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459536 | orchestrator | 
ok: [testbed-node-4] 2026-02-28 00:58:37.459541 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459547 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:37.459552 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:37.459557 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:37.459563 | orchestrator | 2026-02-28 00:58:37.459568 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-28 00:58:37.459574 | orchestrator | 2026-02-28 00:58:37.459579 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:58:37.459585 | orchestrator | Saturday 28 February 2026 00:56:29 +0000 (0:00:01.262) 0:09:52.290 ***** 2026-02-28 00:58:37.459590 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:37.459595 | orchestrator | 2026-02-28 00:58:37.459601 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:37.459650 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.585) 0:09:52.875 ***** 2026-02-28 00:58:37.459656 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:37.459662 | orchestrator | 2026-02-28 00:58:37.459668 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:37.459673 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:01.032) 0:09:53.908 ***** 2026-02-28 00:58:37.459679 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.459684 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.459689 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.459695 | orchestrator | 2026-02-28 00:58:37.459700 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-28 00:58:37.459706 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:00.334) 0:09:54.243 ***** 2026-02-28 00:58:37.459711 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459717 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459722 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459728 | orchestrator | 2026-02-28 00:58:37.459733 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:37.459738 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:00.746) 0:09:54.989 ***** 2026-02-28 00:58:37.459744 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459749 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459755 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459760 | orchestrator | 2026-02-28 00:58:37.459765 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:37.459770 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:01.076) 0:09:56.066 ***** 2026-02-28 00:58:37.459775 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459780 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459785 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459789 | orchestrator | 2026-02-28 00:58:37.459794 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:37.459799 | orchestrator | Saturday 28 February 2026 00:56:34 +0000 (0:00:00.780) 0:09:56.847 ***** 2026-02-28 00:58:37.459804 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.459809 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.459814 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.459819 | orchestrator | 2026-02-28 00:58:37.459823 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 
00:58:37.459828 | orchestrator | Saturday 28 February 2026 00:56:34 +0000 (0:00:00.316) 0:09:57.163 ***** 2026-02-28 00:58:37.459833 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.459838 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.459843 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.459847 | orchestrator | 2026-02-28 00:58:37.459853 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:37.459906 | orchestrator | Saturday 28 February 2026 00:56:35 +0000 (0:00:00.315) 0:09:57.478 ***** 2026-02-28 00:58:37.459912 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.459917 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.459922 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.459926 | orchestrator | 2026-02-28 00:58:37.459931 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:37.459936 | orchestrator | Saturday 28 February 2026 00:56:35 +0000 (0:00:00.646) 0:09:58.125 ***** 2026-02-28 00:58:37.459941 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459946 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459951 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459956 | orchestrator | 2026-02-28 00:58:37.459961 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:37.459965 | orchestrator | Saturday 28 February 2026 00:56:36 +0000 (0:00:00.834) 0:09:58.960 ***** 2026-02-28 00:58:37.459970 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.459977 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.459984 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.459991 | orchestrator | 2026-02-28 00:58:37.460005 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:37.460015 | orchestrator | 
Saturday 28 February 2026 00:56:37 +0000 (0:00:00.914) 0:09:59.875 ***** 2026-02-28 00:58:37.460026 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.460035 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.460042 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.460049 | orchestrator | 2026-02-28 00:58:37.460056 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:37.460064 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:00.335) 0:10:00.210 ***** 2026-02-28 00:58:37.460077 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.460085 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.460093 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.460101 | orchestrator | 2026-02-28 00:58:37.460109 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:37.460118 | orchestrator | Saturday 28 February 2026 00:56:38 +0000 (0:00:00.725) 0:10:00.935 ***** 2026-02-28 00:58:37.460125 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.460133 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.460140 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.460145 | orchestrator | 2026-02-28 00:58:37.460150 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:37.460155 | orchestrator | Saturday 28 February 2026 00:56:38 +0000 (0:00:00.430) 0:10:01.366 ***** 2026-02-28 00:58:37.460160 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.460164 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.460169 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.460174 | orchestrator | 2026-02-28 00:58:37.460179 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:37.460183 | orchestrator | Saturday 28 February 2026 00:56:39 
+0000 (0:00:00.496) 0:10:01.863 ***** 2026-02-28 00:58:37.460188 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.460193 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.460198 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.460203 | orchestrator | 2026-02-28 00:58:37.460207 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:37.460212 | orchestrator | Saturday 28 February 2026 00:56:39 +0000 (0:00:00.353) 0:10:02.216 ***** 2026-02-28 00:58:37.460217 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.460222 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.460227 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.460232 | orchestrator | 2026-02-28 00:58:37.460237 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:37.460242 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:00.683) 0:10:02.900 ***** 2026-02-28 00:58:37.460252 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.460257 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.460262 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.460267 | orchestrator | 2026-02-28 00:58:37.460272 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:37.460277 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:00.347) 0:10:03.247 ***** 2026-02-28 00:58:37.460281 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.460286 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.460291 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.460296 | orchestrator | 2026-02-28 00:58:37.460301 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:37.460306 | orchestrator | Saturday 28 February 2026 00:56:41 +0000 (0:00:00.366) 
0:10:03.614 ***** 2026-02-28 00:58:37.460311 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.460316 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.460320 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.460325 | orchestrator | 2026-02-28 00:58:37.460330 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:37.460335 | orchestrator | Saturday 28 February 2026 00:56:41 +0000 (0:00:00.343) 0:10:03.958 ***** 2026-02-28 00:58:37.460340 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.460345 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.460350 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.460354 | orchestrator | 2026-02-28 00:58:37.460359 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-28 00:58:37.460364 | orchestrator | Saturday 28 February 2026 00:56:42 +0000 (0:00:00.868) 0:10:04.827 ***** 2026-02-28 00:58:37.460369 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:37.460374 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:37.460379 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-28 00:58:37.460384 | orchestrator | 2026-02-28 00:58:37.460389 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-28 00:58:37.460393 | orchestrator | Saturday 28 February 2026 00:56:42 +0000 (0:00:00.567) 0:10:05.394 ***** 2026-02-28 00:58:37.460398 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:37.460403 | orchestrator | 2026-02-28 00:58:37.460408 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-28 00:58:37.460413 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:02.274) 0:10:07.669 ***** 2026-02-28 00:58:37.460420 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-28 00:58:37.460427 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:37.460432 | orchestrator | 2026-02-28 00:58:37.460437 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-28 00:58:37.460442 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:00.260) 0:10:07.929 ***** 2026-02-28 00:58:37.460448 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 00:58:37.460462 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 00:58:37.460467 | orchestrator | 2026-02-28 00:58:37.460473 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-28 00:58:37.460481 | orchestrator | Saturday 28 February 2026 00:56:54 +0000 (0:00:09.236) 0:10:17.166 ***** 2026-02-28 00:58:37.460492 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:37.460500 | orchestrator | 2026-02-28 00:58:37.460508 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-28 00:58:37.460515 | orchestrator | Saturday 28 February 2026 00:56:58 +0000 (0:00:04.064) 0:10:21.230 ***** 2026-02-28 00:58:37.460523 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5
2026-02-28 00:58:37.460531 | orchestrator |
2026-02-28 00:58:37.460537 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-28 00:58:37.460544 | orchestrator | Saturday 28 February 2026 00:56:59 +0000 (0:00:01.030) 0:10:22.260 *****
2026-02-28 00:58:37.460552 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-28 00:58:37.460559 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-28 00:58:37.460566 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-28 00:58:37.460573 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-28 00:58:37.460581 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-28 00:58:37.460588 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-28 00:58:37.460596 | orchestrator |
2026-02-28 00:58:37.460603 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-28 00:58:37.460628 | orchestrator | Saturday 28 February 2026 00:57:01 +0000 (0:00:01.395) 0:10:23.656 *****
2026-02-28 00:58:37.460636 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.460644 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.460652 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:37.460660 | orchestrator |
2026-02-28 00:58:37.460668 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:58:37.460676 | orchestrator | Saturday 28 February 2026 00:57:03 +0000 (0:00:02.333) 0:10:25.990 *****
2026-02-28 00:58:37.460684 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.460693 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.460702 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.460709 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.460717 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.460726 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.460733 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.460741 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.460748 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.460755 | orchestrator |
2026-02-28 00:58:37.460763 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-28 00:58:37.460772 | orchestrator | Saturday 28 February 2026 00:57:04 +0000 (0:00:01.414) 0:10:27.404 *****
2026-02-28 00:58:37.460779 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.460787 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.460796 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.460803 | orchestrator |
2026-02-28 00:58:37.460812 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-28 00:58:37.460819 | orchestrator | Saturday 28 February 2026 00:57:07 +0000 (0:00:02.384) 0:10:29.789 *****
2026-02-28 00:58:37.460828 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.460836 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.460843 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.460851 | orchestrator |
2026-02-28 00:58:37.460858 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-28 00:58:37.460866 | orchestrator | Saturday 28 February 2026 00:57:07 +0000 (0:00:00.340) 0:10:30.130 *****
2026-02-28 00:58:37.460874 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.460889 | orchestrator |
2026-02-28 00:58:37.460896 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-28 00:58:37.460903 | orchestrator | Saturday 28 February 2026 00:57:08 +0000 (0:00:00.737) 0:10:30.868 *****
2026-02-28 00:58:37.460910 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.460917 | orchestrator |
2026-02-28 00:58:37.460923 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-28 00:58:37.460931 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:00.684) 0:10:31.553 *****
2026-02-28 00:58:37.460938 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.460945 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.460953 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.460960 | orchestrator |
2026-02-28 00:58:37.460967 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-28 00:58:37.460974 | orchestrator | Saturday 28 February 2026 00:57:10 +0000 (0:00:01.636) 0:10:33.190 *****
2026-02-28 00:58:37.460981 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.460988 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.460995 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.461003 | orchestrator |
2026-02-28 00:58:37.461010 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-28 00:58:37.461018 | orchestrator | Saturday 28 February 2026 00:57:12 +0000 (0:00:02.130) 0:10:34.721 *****
2026-02-28 00:58:37.461025 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.461037 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.461044 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.461052 | orchestrator |
2026-02-28 00:58:37.461060 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-28 00:58:37.461067 | orchestrator | Saturday 28 February 2026 00:57:14 +0000 (0:00:02.130) 0:10:36.851 *****
2026-02-28 00:58:37.461075 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.461092 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.461100 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.461109 | orchestrator |
2026-02-28 00:58:37.461117 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-28 00:58:37.461126 | orchestrator | Saturday 28 February 2026 00:57:16 +0000 (0:00:01.996) 0:10:38.847 *****
2026-02-28 00:58:37.461133 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461142 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461149 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461157 | orchestrator |
2026-02-28 00:58:37.461164 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:58:37.461172 | orchestrator | Saturday 28 February 2026 00:57:17 +0000 (0:00:01.323) 0:10:40.171 *****
2026-02-28 00:58:37.461178 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.461187 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.461195 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.461202 | orchestrator |
2026-02-28 00:58:37.461209 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-28 00:58:37.461217 | orchestrator | Saturday 28 February 2026 00:57:18 +0000 (0:00:00.616) 0:10:40.788 *****
2026-02-28 00:58:37.461225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.461233 | orchestrator |
2026-02-28 00:58:37.461241 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-28 00:58:37.461249 | orchestrator | Saturday 28 February 2026 00:57:19 +0000 (0:00:00.699) 0:10:41.488 *****
2026-02-28 00:58:37.461257 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461265 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461273 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461282 | orchestrator |
2026-02-28 00:58:37.461289 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-28 00:58:37.461304 | orchestrator | Saturday 28 February 2026 00:57:19 +0000 (0:00:00.287) 0:10:41.775 *****
2026-02-28 00:58:37.461312 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.461320 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.461329 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.461335 | orchestrator |
2026-02-28 00:58:37.461343 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-28 00:58:37.461350 | orchestrator | Saturday 28 February 2026 00:57:20 +0000 (0:00:01.130) 0:10:42.906 *****
2026-02-28 00:58:37.461358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.461366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.461374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.461383 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461391 | orchestrator |
2026-02-28 00:58:37.461398 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-28 00:58:37.461406 | orchestrator | Saturday 28 February 2026 00:57:21 +0000 (0:00:00.780) 0:10:43.687 *****
2026-02-28 00:58:37.461414 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461421 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461430 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461438 | orchestrator |
2026-02-28 00:58:37.461445 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-28 00:58:37.461453 | orchestrator |
2026-02-28 00:58:37.461461 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:58:37.461470 | orchestrator | Saturday 28 February 2026 00:57:21 +0000 (0:00:00.690) 0:10:44.378 *****
2026-02-28 00:58:37.461478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.461487 | orchestrator |
2026-02-28 00:58:37.461495 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:58:37.461503 | orchestrator | Saturday 28 February 2026 00:57:22 +0000 (0:00:00.507) 0:10:44.886 *****
2026-02-28 00:58:37.461512 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.461521 | orchestrator |
2026-02-28 00:58:37.461530 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:58:37.461538 | orchestrator | Saturday 28 February 2026 00:57:23 +0000 (0:00:00.643) 0:10:45.529 *****
2026-02-28 00:58:37.461546 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461554 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.461562 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.461571 | orchestrator |
2026-02-28 00:58:37.461579 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:58:37.461587 | orchestrator | Saturday 28 February 2026 00:57:23 +0000 (0:00:00.332) 0:10:45.862 *****
2026-02-28 00:58:37.461595 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461600 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461623 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461629 | orchestrator |
2026-02-28 00:58:37.461634 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:58:37.461639 | orchestrator | Saturday 28 February 2026 00:57:24 +0000 (0:00:00.701) 0:10:46.564 *****
2026-02-28 00:58:37.461644 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461649 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461654 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461659 | orchestrator |
2026-02-28 00:58:37.461664 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:58:37.461668 | orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:01.050) 0:10:47.615 *****
2026-02-28 00:58:37.461673 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461678 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461683 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461694 | orchestrator |
2026-02-28 00:58:37.461699 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:58:37.461704 | orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:00.751) 0:10:48.366 *****
2026-02-28 00:58:37.461709 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461715 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.461720 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.461724 | orchestrator |
2026-02-28 00:58:37.461737 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:58:37.461743 | orchestrator | Saturday 28 February 2026 00:57:26 +0000 (0:00:00.406) 0:10:48.772 *****
2026-02-28 00:58:37.461747 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461753 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.461757 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.461762 | orchestrator |
2026-02-28 00:58:37.461767 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:58:37.461772 | orchestrator | Saturday 28 February 2026 00:57:26 +0000 (0:00:00.326) 0:10:49.099 *****
2026-02-28 00:58:37.461777 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461845 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.461863 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.461868 | orchestrator |
2026-02-28 00:58:37.461873 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:58:37.461878 | orchestrator | Saturday 28 February 2026 00:57:27 +0000 (0:00:00.614) 0:10:49.713 *****
2026-02-28 00:58:37.461883 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461888 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461893 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461898 | orchestrator |
2026-02-28 00:58:37.461903 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:58:37.461907 | orchestrator | Saturday 28 February 2026 00:57:28 +0000 (0:00:00.765) 0:10:50.479 *****
2026-02-28 00:58:37.461912 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.461917 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.461922 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.461927 | orchestrator |
2026-02-28 00:58:37.461932 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:58:37.461936 | orchestrator | Saturday 28 February 2026 00:57:28 +0000 (0:00:00.905) 0:10:51.384 *****
2026-02-28 00:58:37.461941 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461946 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.461951 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.461956 | orchestrator |
2026-02-28 00:58:37.461961 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:58:37.461966 | orchestrator | Saturday 28 February 2026 00:57:29 +0000 (0:00:00.383) 0:10:51.768 *****
2026-02-28 00:58:37.461970 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.461975 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.461980 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.461985 | orchestrator |
2026-02-28 00:58:37.461990 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:58:37.461994 | orchestrator | Saturday 28 February 2026 00:57:29 +0000 (0:00:00.304) 0:10:52.073 *****
2026-02-28 00:58:37.461999 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.462004 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.462009 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.462038 | orchestrator |
2026-02-28 00:58:37.462044 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:58:37.462049 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:00.719) 0:10:52.792 *****
2026-02-28 00:58:37.462054 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.462059 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.462064 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.462068 | orchestrator |
2026-02-28 00:58:37.462074 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:58:37.462083 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:00.358) 0:10:53.151 *****
2026-02-28 00:58:37.462088 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.462093 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.462098 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.462103 | orchestrator |
2026-02-28 00:58:37.462108 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:58:37.462113 | orchestrator | Saturday 28 February 2026 00:57:31 +0000 (0:00:00.336) 0:10:53.488 *****
2026-02-28 00:58:37.462117 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462122 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.462127 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.462132 | orchestrator |
2026-02-28 00:58:37.462137 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:58:37.462142 | orchestrator | Saturday 28 February 2026 00:57:31 +0000 (0:00:00.302) 0:10:53.791 *****
2026-02-28 00:58:37.462146 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462151 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.462156 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.462161 | orchestrator |
2026-02-28 00:58:37.462166 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:58:37.462171 | orchestrator | Saturday 28 February 2026 00:57:31 +0000 (0:00:00.622) 0:10:54.413 *****
2026-02-28 00:58:37.462175 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462180 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.462185 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.462190 | orchestrator |
2026-02-28 00:58:37.462195 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:58:37.462200 | orchestrator | Saturday 28 February 2026 00:57:32 +0000 (0:00:00.361) 0:10:54.774 *****
2026-02-28 00:58:37.462205 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.462209 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.462214 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.462219 | orchestrator |
2026-02-28 00:58:37.462224 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:58:37.462229 | orchestrator | Saturday 28 February 2026 00:57:32 +0000 (0:00:00.348) 0:10:55.123 *****
2026-02-28 00:58:37.462234 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.462238 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.462243 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.462248 | orchestrator |
2026-02-28 00:58:37.462253 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-28 00:58:37.462261 | orchestrator | Saturday 28 February 2026 00:57:33 +0000 (0:00:00.819) 0:10:55.942 *****
2026-02-28 00:58:37.462266 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.462272 | orchestrator |
2026-02-28 00:58:37.462277 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-28 00:58:37.462287 | orchestrator | Saturday 28 February 2026 00:57:34 +0000 (0:00:00.573) 0:10:56.516 *****
2026-02-28 00:58:37.462292 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462297 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.462302 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:37.462307 | orchestrator |
2026-02-28 00:58:37.462312 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:58:37.462317 | orchestrator | Saturday 28 February 2026 00:57:36 +0000 (0:00:02.373) 0:10:58.889 *****
2026-02-28 00:58:37.462322 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.462327 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.462331 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.462336 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.462341 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.462350 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.462355 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.462360 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.462365 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.462370 | orchestrator |
2026-02-28 00:58:37.462375 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-28 00:58:37.462380 | orchestrator | Saturday 28 February 2026 00:57:38 +0000 (0:00:01.564) 0:11:00.454 *****
2026-02-28 00:58:37.462385 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462390 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.462399 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.462408 | orchestrator |
2026-02-28 00:58:37.462416 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-28 00:58:37.462425 | orchestrator | Saturday 28 February 2026 00:57:38 +0000 (0:00:00.383) 0:11:00.838 *****
2026-02-28 00:58:37.462434 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.462443 | orchestrator |
2026-02-28 00:58:37.462452 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-28 00:58:37.462461 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:00.601) 0:11:01.439 *****
2026-02-28 00:58:37.462469 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.462475 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.462480 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.462485 | orchestrator |
2026-02-28 00:58:37.462490 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-28 00:58:37.462495 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:01.600) 0:11:03.040 *****
2026-02-28 00:58:37.462500 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462505 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-28 00:58:37.462510 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462515 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-28 00:58:37.462519 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462524 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-28 00:58:37.462529 | orchestrator |
2026-02-28 00:58:37.462534 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-28 00:58:37.462539 | orchestrator | Saturday 28 February 2026 00:57:45 +0000 (0:00:05.028) 0:11:08.068 *****
2026-02-28 00:58:37.462544 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462549 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:37.462553 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462558 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:37.462563 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:37.462568 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:37.462573 | orchestrator |
2026-02-28 00:58:37.462578 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:58:37.462588 | orchestrator | Saturday 28 February 2026 00:57:48 +0000 (0:00:02.417) 0:11:10.485 *****
2026-02-28 00:58:37.462593 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:58:37.462597 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.462645 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:58:37.462652 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.462657 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:58:37.462661 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.462666 | orchestrator |
2026-02-28 00:58:37.462671 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-28 00:58:37.462680 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:01.343) 0:11:11.829 *****
2026-02-28 00:58:37.462685 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-28 00:58:37.462690 | orchestrator |
2026-02-28 00:58:37.462695 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-28 00:58:37.462699 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:00.277) 0:11:12.107 *****
2026-02-28 00:58:37.462704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462729 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462734 | orchestrator |
2026-02-28 00:58:37.462739 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-28 00:58:37.462744 | orchestrator | Saturday 28 February 2026 00:57:51 +0000 (0:00:01.358) 0:11:13.465 *****
2026-02-28 00:58:37.462749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462773 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462778 | orchestrator |
2026-02-28 00:58:37.462783 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-28 00:58:37.462788 | orchestrator | Saturday 28 February 2026 00:57:51 +0000 (0:00:00.692) 0:11:14.157 *****
2026-02-28 00:58:37.462793 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462798 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462803 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462812 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-28 00:58:37.462821 | orchestrator |
2026-02-28 00:58:37.462826 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-28 00:58:37.462830 | orchestrator | Saturday 28 February 2026 00:58:21 +0000 (0:00:30.164) 0:11:44.322 *****
2026-02-28 00:58:37.462835 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462840 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.462844 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.462849 | orchestrator |
2026-02-28 00:58:37.462853 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-28 00:58:37.462858 | orchestrator | Saturday 28 February 2026 00:58:22 +0000 (0:00:00.329) 0:11:44.652 *****
2026-02-28 00:58:37.462863 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.462867 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.462872 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.462876 | orchestrator |
2026-02-28 00:58:37.462881 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-28 00:58:37.462885 | orchestrator | Saturday 28 February 2026 00:58:22 +0000 (0:00:00.335) 0:11:44.988 *****
2026-02-28 00:58:37.462890 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.462895 | orchestrator |
2026-02-28 00:58:37.462903 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-28 00:58:37.462908 | orchestrator | Saturday 28 February 2026 00:58:23 +0000 (0:00:00.923) 0:11:45.911 *****
2026-02-28 00:58:37.462912 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.462917 | orchestrator |
2026-02-28 00:58:37.462925 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-28 00:58:37.462929 | orchestrator | Saturday 28 February 2026 00:58:24 +0000 (0:00:00.578) 0:11:46.490 *****
2026-02-28 00:58:37.462934 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.462938 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.462943 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.462948 | orchestrator |
2026-02-28 00:58:37.462952 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-28 00:58:37.462957 | orchestrator | Saturday 28 February 2026 00:58:25 +0000 (0:00:01.463) 0:11:47.953 *****
2026-02-28 00:58:37.462962 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.462966 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.462971 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.462975 | orchestrator |
2026-02-28 00:58:37.462980 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-28 00:58:37.462984 | orchestrator | Saturday 28 February 2026 00:58:27 +0000 (0:00:01.662) 0:11:49.616 *****
2026-02-28 00:58:37.462989 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:37.462993 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:37.462998 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:37.463003 | orchestrator |
2026-02-28 00:58:37.463007 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-28 00:58:37.463012 | orchestrator | Saturday 28 February 2026 00:58:29 +0000 (0:00:02.017) 0:11:51.633 *****
2026-02-28 00:58:37.463016 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.463021 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.463026 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-28 00:58:37.463034 | orchestrator |
2026-02-28 00:58:37.463039 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:58:37.463043 | orchestrator | Saturday 28 February 2026 00:58:32 +0000 (0:00:02.901) 0:11:54.535 *****
2026-02-28 00:58:37.463048 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.463052 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.463057 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.463061 | orchestrator |
2026-02-28 00:58:37.463066 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-28 00:58:37.463071 | orchestrator | Saturday 28 February 2026 00:58:32 +0000 (0:00:00.353) 0:11:54.889 *****
2026-02-28 00:58:37.463075 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:37.463080 | orchestrator |
2026-02-28 00:58:37.463085 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-28 00:58:37.463089 | orchestrator | Saturday 28 February 2026 00:58:33 +0000 (0:00:00.577) 0:11:55.467 *****
2026-02-28 00:58:37.463094 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:37.463098 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:37.463103 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:37.463107 | orchestrator |
2026-02-28 00:58:37.463112 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-28 00:58:37.463117 | orchestrator | Saturday 28 February 2026 00:58:33 +0000 (0:00:00.583) 0:11:56.051 *****
2026-02-28 00:58:37.463122 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:37.463126 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:37.463131 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:37.463135 | orchestrator |
2026-02-28 00:58:37.463140 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-28 00:58:37.463145 | orchestrator | Saturday 28 February 2026 00:58:33 +0000 (0:00:00.328) 0:11:56.379 *****
2026-02-28 00:58:37.463149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:37.463154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:37.463158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:37.463163 | orchestrator
| skipping: [testbed-node-3] 2026-02-28 00:58:37.463167 | orchestrator | 2026-02-28 00:58:37.463172 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-28 00:58:37.463177 | orchestrator | Saturday 28 February 2026 00:58:34 +0000 (0:00:00.705) 0:11:57.085 ***** 2026-02-28 00:58:37.463181 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:37.463186 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:37.463191 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:37.463195 | orchestrator | 2026-02-28 00:58:37.463200 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:58:37.463205 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-28 00:58:37.463210 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-28 00:58:37.463215 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-28 00:58:37.463222 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-28 00:58:37.463227 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-28 00:58:37.463238 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-28 00:58:37.463246 | orchestrator | 2026-02-28 00:58:37.463251 | orchestrator | 2026-02-28 00:58:37.463255 | orchestrator | 2026-02-28 00:58:37.463260 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:58:37.463264 | orchestrator | Saturday 28 February 2026 00:58:34 +0000 (0:00:00.247) 0:11:57.333 ***** 2026-02-28 00:58:37.463269 | orchestrator | =============================================================================== 
2026-02-28 00:58:37.463274 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 45.07s 2026-02-28 00:58:37.463278 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.33s 2026-02-28 00:58:37.463283 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 31.19s 2026-02-28 00:58:37.463287 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.16s 2026-02-28 00:58:37.463292 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.01s 2026-02-28 00:58:37.463296 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.09s 2026-02-28 00:58:37.463301 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.85s 2026-02-28 00:58:37.463305 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.63s 2026-02-28 00:58:37.463310 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.42s 2026-02-28 00:58:37.463314 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.24s 2026-02-28 00:58:37.463319 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.96s 2026-02-28 00:58:37.463324 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.56s 2026-02-28 00:58:37.463328 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.64s 2026-02-28 00:58:37.463333 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.03s 2026-02-28 00:58:37.463337 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 5.00s 2026-02-28 00:58:37.463342 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.67s 2026-02-28 
00:58:37.463346 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.15s 2026-02-28 00:58:37.463351 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.07s 2026-02-28 00:58:37.463356 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.06s 2026-02-28 00:58:37.463360 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 4.04s 2026-02-28 00:58:37.463365 | orchestrator | 2026-02-28 00:58:37 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:58:37.463369 | orchestrator | 2026-02-28 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:58:40.496989 | orchestrator | 2026-02-28 00:58:40 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:58:40.497365 | orchestrator | 2026-02-28 00:58:40 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state STARTED 2026-02-28 00:58:40.500792 | orchestrator | 2026-02-28 00:58:40 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:58:40.500845 | orchestrator | 2026-02-28 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:17.203794 | orchestrator | 2026-02-28 00:59:17 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:17.207474 | orchestrator | 2026-02-28 00:59:17.207702 | orchestrator | 2026-02-28 00:59:17.207718 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:59:17.207726 | orchestrator | 2026-02-28 00:59:17.207733 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:59:17.207756 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:00.397) 0:00:00.397 ***** 2026-02-28 00:59:17.207762 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:17.207770 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:17.207777 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:17.207783 | orchestrator | 2026-02-28 00:59:17.207789 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:59:17.207796 | orchestrator | Saturday 28 February 2026 00:56:29 +0000 (0:00:00.387) 0:00:00.785 ***** 2026-02-28 00:59:17.207802 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-28 00:59:17.207809 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-28 00:59:17.207816 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-28 00:59:17.207822 | orchestrator | 2026-02-28 00:59:17.207828 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-28 00:59:17.207835 | orchestrator | 2026-02-28 00:59:17.207841 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:17.207848 | orchestrator | Saturday 28 February 2026 00:56:29 +0000 (0:00:00.525) 0:00:01.311 ***** 2026-02-28 00:59:17.207855 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:17.207862 | orchestrator | 2026-02-28 00:59:17.207868 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-28 00:59:17.207875 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.536) 0:00:01.847 ***** 2026-02-28 00:59:17.207881 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 00:59:17.207888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 00:59:17.207894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 00:59:17.207900 | orchestrator | 2026-02-28 00:59:17.207927 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-28 00:59:17.207934 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.743) 0:00:02.591 ***** 2026-02-28 00:59:17.207944 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.207955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.207974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.207987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.207996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208016 | orchestrator | 2026-02-28 
00:59:17.208022 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:17.208029 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:01.867) 0:00:04.459 ***** 2026-02-28 00:59:17.208035 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:17.208042 | orchestrator | 2026-02-28 00:59:17.208048 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-28 00:59:17.208054 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.560) 0:00:05.020 ***** 2026-02-28 00:59:17.208071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208130 | orchestrator | 2026-02-28 00:59:17.208137 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-28 00:59:17.208144 | orchestrator | Saturday 28 February 2026 00:56:36 +0000 (0:00:02.996) 0:00:08.016 ***** 2026-02-28 00:59:17.208150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:17.208157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:17.208164 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:17.208179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:17.208186 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:17.208196 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:17.208203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 
00:59:17.208209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:17.208216 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:17.208222 | orchestrator | 2026-02-28 00:59:17.208229 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-28 00:59:17.208235 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:01.346) 0:00:09.363 ***** 2026-02-28 00:59:17.208248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:17.208255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:17.208267 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:17.208273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:17.208295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:17.208302 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:17.208313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:17.208324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:17.208343 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:17.208350 | orchestrator | 2026-02-28 00:59:17.208357 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-28 00:59:17.208363 | orchestrator | Saturday 28 February 2026 00:56:39 +0000 (0:00:01.443) 0:00:10.807 ***** 2026-02-28 
00:59:17.208371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-28 00:59:17.208428 | orchestrator | 2026-02-28 00:59:17.208434 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-28 00:59:17.208441 | orchestrator | Saturday 28 February 2026 00:56:41 +0000 (0:00:02.754) 0:00:13.561 ***** 2026-02-28 00:59:17.208448 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:17.208454 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:17.208460 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:17.208466 | orchestrator | 2026-02-28 00:59:17.208471 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-28 00:59:17.208478 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:03.536) 0:00:17.098 ***** 2026-02-28 00:59:17.208484 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:17.208490 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:17.208496 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:17.208503 | orchestrator | 2026-02-28 00:59:17.208509 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-28 00:59:17.208515 | orchestrator | Saturday 28 February 2026 00:56:47 +0000 (0:00:02.365) 0:00:19.463 ***** 2026-02-28 00:59:17.208536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:17.208556 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:17.208587 | orchestrator | 2026-02-28 00:59:17.208593 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:17.208600 | orchestrator | Saturday 28 February 2026 00:56:49 +0000 (0:00:02.166) 0:00:21.629 ***** 2026-02-28 00:59:17.208606 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:17.208657 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:17.208665 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:17.208671 | orchestrator | 2026-02-28 00:59:17.208677 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 00:59:17.208684 | orchestrator | Saturday 
28 February 2026 00:56:50 +0000 (0:00:00.316) 0:00:21.946 ***** 2026-02-28 00:59:17.208690 | orchestrator | 2026-02-28 00:59:17.208697 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 00:59:17.208703 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.079) 0:00:22.025 ***** 2026-02-28 00:59:17.208709 | orchestrator | 2026-02-28 00:59:17.208716 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 00:59:17.208722 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.072) 0:00:22.098 ***** 2026-02-28 00:59:17.208729 | orchestrator | 2026-02-28 00:59:17.208735 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-28 00:59:17.208742 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.073) 0:00:22.171 ***** 2026-02-28 00:59:17.208748 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:17.208755 | orchestrator | 2026-02-28 00:59:17.208761 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-28 00:59:17.208768 | orchestrator | Saturday 28 February 2026 00:56:51 +0000 (0:00:00.694) 0:00:22.866 ***** 2026-02-28 00:59:17.208775 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:17.208781 | orchestrator | 2026-02-28 00:59:17.208788 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-28 00:59:17.208795 | orchestrator | Saturday 28 February 2026 00:56:51 +0000 (0:00:00.202) 0:00:23.068 ***** 2026-02-28 00:59:17.208802 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:17.208808 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:17.208821 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:17.208827 | orchestrator | 2026-02-28 00:59:17.208834 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards 
container] ********* 2026-02-28 00:59:17.208840 | orchestrator | Saturday 28 February 2026 00:57:53 +0000 (0:01:01.866) 0:01:24.934 ***** 2026-02-28 00:59:17.208847 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:17.208854 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:17.208860 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:17.208867 | orchestrator | 2026-02-28 00:59:17.208873 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:17.208880 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:01:10.904) 0:02:35.838 ***** 2026-02-28 00:59:17.208887 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:17.208893 | orchestrator | 2026-02-28 00:59:17.208900 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-28 00:59:17.208906 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.784) 0:02:36.623 ***** 2026-02-28 00:59:17.208913 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:17.208920 | orchestrator | 2026-02-28 00:59:17.208926 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-28 00:59:17.208933 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:02.664) 0:02:39.287 ***** 2026-02-28 00:59:17.208939 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:17.208946 | orchestrator | 2026-02-28 00:59:17.208953 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-28 00:59:17.208960 | orchestrator | Saturday 28 February 2026 00:59:10 +0000 (0:00:02.629) 0:02:41.917 ***** 2026-02-28 00:59:17.208966 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:17.208973 | orchestrator | 2026-02-28 00:59:17.208980 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] 
***************** 2026-02-28 00:59:17.208986 | orchestrator | Saturday 28 February 2026 00:59:13 +0000 (0:00:03.192) 0:02:45.109 ***** 2026-02-28 00:59:17.208993 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:17.209000 | orchestrator | 2026-02-28 00:59:17.209011 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:59:17.209023 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:59:17.209031 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:59:17.209038 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:59:17.209044 | orchestrator | 2026-02-28 00:59:17.209050 | orchestrator | 2026-02-28 00:59:17.209056 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:59:17.209063 | orchestrator | Saturday 28 February 2026 00:59:16 +0000 (0:00:02.795) 0:02:47.904 ***** 2026-02-28 00:59:17.209069 | orchestrator | =============================================================================== 2026-02-28 00:59:17.209075 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.90s 2026-02-28 00:59:17.209081 | orchestrator | opensearch : Restart opensearch container ------------------------------ 61.87s 2026-02-28 00:59:17.209088 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.54s 2026-02-28 00:59:17.209095 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.19s 2026-02-28 00:59:17.209101 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.00s 2026-02-28 00:59:17.209108 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.80s 2026-02-28 00:59:17.209114 
| orchestrator | opensearch : Copying over config.json files for services ---------------- 2.75s 2026-02-28 00:59:17.209121 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.66s 2026-02-28 00:59:17.209132 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.63s 2026-02-28 00:59:17.209139 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.37s 2026-02-28 00:59:17.209146 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.17s 2026-02-28 00:59:17.209152 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.87s 2026-02-28 00:59:17.209159 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.44s 2026-02-28 00:59:17.209165 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.35s 2026-02-28 00:59:17.209172 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s 2026-02-28 00:59:17.209178 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.74s 2026-02-28 00:59:17.209185 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.69s 2026-02-28 00:59:17.209191 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-02-28 00:59:17.209198 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-02-28 00:59:17.209204 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-02-28 00:59:17.209210 | orchestrator | 2026-02-28 00:59:17 | INFO  | Task c9faee9b-4874-43a9-bd14-e4ee90f11eb9 is in state SUCCESS 2026-02-28 00:59:17.209217 | orchestrator | 2026-02-28 00:59:17 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 
2026-02-28 00:59:17.209224 | orchestrator | 2026-02-28 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:20.256585 | orchestrator | 2026-02-28 00:59:20 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:20.257194 | orchestrator | 2026-02-28 00:59:20 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:20.257233 | orchestrator | 2026-02-28 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:23.301406 | orchestrator | 2026-02-28 00:59:23 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:23.303756 | orchestrator | 2026-02-28 00:59:23 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:23.303794 | orchestrator | 2026-02-28 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:26.342470 | orchestrator | 2026-02-28 00:59:26 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:26.344599 | orchestrator | 2026-02-28 00:59:26 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:26.344847 | orchestrator | 2026-02-28 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:29.389070 | orchestrator | 2026-02-28 00:59:29 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:29.390559 | orchestrator | 2026-02-28 00:59:29 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:29.390602 | orchestrator | 2026-02-28 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:32.435891 | orchestrator | 2026-02-28 00:59:32 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:32.438210 | orchestrator | 2026-02-28 00:59:32 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:32.438261 | orchestrator | 2026-02-28 00:59:32 | INFO  | Wait 
1 second(s) until the next check 2026-02-28 00:59:35.483356 | orchestrator | 2026-02-28 00:59:35 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:35.485048 | orchestrator | 2026-02-28 00:59:35 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:35.485137 | orchestrator | 2026-02-28 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:38.530700 | orchestrator | 2026-02-28 00:59:38 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:38.531331 | orchestrator | 2026-02-28 00:59:38 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:38.531360 | orchestrator | 2026-02-28 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:41.577805 | orchestrator | 2026-02-28 00:59:41 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:41.578278 | orchestrator | 2026-02-28 00:59:41 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state STARTED 2026-02-28 00:59:41.578321 | orchestrator | 2026-02-28 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:44.629392 | orchestrator | 2026-02-28 00:59:44 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:44.635604 | orchestrator | 2026-02-28 00:59:44 | INFO  | Task 78c2351f-21d4-413b-938b-2f17c462612f is in state SUCCESS 2026-02-28 00:59:44.635718 | orchestrator | 2026-02-28 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:44.637317 | orchestrator | 2026-02-28 00:59:44.637582 | orchestrator | 2026-02-28 00:59:44.637601 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-28 00:59:44.637613 | orchestrator | 2026-02-28 00:59:44.637648 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-28 00:59:44.637660 | orchestrator | 
Saturday 28 February 2026 00:56:28 +0000 (0:00:00.103) 0:00:00.103 *****
2026-02-28 00:59:44.637672 | orchestrator | ok: [localhost] => {
2026-02-28 00:59:44.637684 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-02-28 00:59:44.637696 | orchestrator | }
2026-02-28 00:59:44.637707 | orchestrator |
2026-02-28 00:59:44.637718 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-02-28 00:59:44.637729 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:00.057) 0:00:00.161 *****
2026-02-28 00:59:44.637740 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-02-28 00:59:44.637753 | orchestrator | ...ignoring
2026-02-28 00:59:44.637764 | orchestrator |
2026-02-28 00:59:44.637775 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-02-28 00:59:44.637786 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:03.008) 0:00:03.169 *****
2026-02-28 00:59:44.637797 | orchestrator | skipping: [localhost]
2026-02-28 00:59:44.637808 | orchestrator |
2026-02-28 00:59:44.637819 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-02-28 00:59:44.637830 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:00.056) 0:00:03.226 *****
2026-02-28 00:59:44.637841 | orchestrator | ok: [localhost]
2026-02-28 00:59:44.637852 | orchestrator |
2026-02-28 00:59:44.637863 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:59:44.637874 | orchestrator |
2026-02-28 00:59:44.637885 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 00:59:44.637896 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:00.149) 0:00:03.376 *****
2026-02-28 00:59:44.637906 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:59:44.637917 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:59:44.637928 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:59:44.637939 | orchestrator |
2026-02-28 00:59:44.637951 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:59:44.637962 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:00.313) 0:00:03.689 *****
2026-02-28 00:59:44.637997 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-28 00:59:44.638010 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-28 00:59:44.638081 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-28 00:59:44.638093 | orchestrator |
2026-02-28 00:59:44.638103 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-28 00:59:44.638114 | orchestrator |
2026-02-28 00:59:44.638127 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-28 00:59:44.638140 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:00.630) 0:00:04.320 *****
2026-02-28 00:59:44.638152 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:59:44.638165 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:59:44.638177 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:59:44.638190 | orchestrator |
2026-02-28 00:59:44.638202 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-28 00:59:44.638215 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.773) 0:00:04.708 *****
2026-02-28 00:59:44.638227 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:59:44.638241 | orchestrator |
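The "Set kolla_action_mariadb" plays above implement a simple decision: probe the MariaDB VIP with a wait_for-style banner check, and if the service already answers, upgrade in place instead of deploying fresh (the timeout seen in the log is therefore expected on a first run). A minimal Python sketch of that logic, assuming illustrative function names that do not come from the playbook itself:

```python
import socket


def probe_mariadb(host, port=3306, search=b"MariaDB", timeout=3.0):
    """Banner check in the spirit of Ansible's wait_for with search_regex:
    connect via TCP, read the server greeting, and look for a marker string.
    Returns False on refusal or timeout (i.e. service not yet deployed)."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(1024)
    except OSError:
        return False
    return search in banner


def choose_kolla_action(mariadb_running, default_action="deploy"):
    # A cluster that is already answering is upgraded in place;
    # otherwise fall back to the default action (deploy on a fresh testbed).
    return "upgrade" if mariadb_running else default_action
```

On the run logged here the probe against 192.168.16.9:3306 timed out, the upgrade branch was skipped, and the default action was kept.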
2026-02-28 00:59:44.638253 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-28 00:59:44.638281 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.773) 0:00:05.482 ***** 2026-02-28 00:59:44.638317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.638336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 
00:59:44.638366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.638381 | orchestrator | 2026-02-28 00:59:44.638403 | orchestrator | TASK [mariadb : Ensuring database backup config 
directory exists] ************** 2026-02-28 00:59:44.638416 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:03.464) 0:00:08.946 ***** 2026-02-28 00:59:44.638428 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.638441 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.638453 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.638465 | orchestrator | 2026-02-28 00:59:44.638477 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-28 00:59:44.638491 | orchestrator | Saturday 28 February 2026 00:56:38 +0000 (0:00:01.035) 0:00:09.982 ***** 2026-02-28 00:59:44.638501 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.638512 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.638523 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.638534 | orchestrator | 2026-02-28 00:59:44.638544 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-28 00:59:44.638563 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:01.901) 0:00:11.884 ***** 2026-02-28 00:59:44.638575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.638600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.638614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.638666 | orchestrator | 2026-02-28 00:59:44.638678 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-28 00:59:44.638689 | orchestrator | Saturday 28 February 2026 00:56:44 +0000 (0:00:04.349) 0:00:16.233 ***** 2026-02-28 00:59:44.638700 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.638711 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.638722 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.638733 | orchestrator | 2026-02-28 00:59:44.638744 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-28 00:59:44.638755 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:01.116) 0:00:17.350 ***** 2026-02-28 00:59:44.638766 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.638776 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:44.638793 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:44.638804 | orchestrator | 2026-02-28 00:59:44.638815 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:44.638826 | orchestrator | Saturday 28 February 2026 00:56:50 
+0000 (0:00:04.802) 0:00:22.153 ***** 2026-02-28 00:59:44.638837 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:44.638848 | orchestrator | 2026-02-28 00:59:44.638859 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-28 00:59:44.638870 | orchestrator | Saturday 28 February 2026 00:56:51 +0000 (0:00:00.548) 0:00:22.701 ***** 2026-02-28 00:59:44.638891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.638909 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.638927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.638939 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.638959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.638979 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.638990 | orchestrator | 2026-02-28 00:59:44.639000 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-28 00:59:44.639012 | orchestrator | Saturday 28 February 2026 00:56:54 +0000 (0:00:03.682) 0:00:26.383 ***** 2026-02-28 00:59:44.639023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.639047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.639075 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.639087 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.639098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.639110 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.639121 | orchestrator | 2026-02-28 00:59:44.639132 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-28 00:59:44.639142 | orchestrator | Saturday 28 February 2026 00:56:57 +0000 (0:00:03.036) 0:00:29.419 ***** 2026-02-28 00:59:44.639164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.639183 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.639195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.639207 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.639223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:44.639242 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.639253 | orchestrator | 2026-02-28 00:59:44.639263 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-28 00:59:44.639275 | orchestrator | Saturday 28 February 2026 00:57:01 +0000 (0:00:03.630) 0:00:33.050 ***** 2026-02-28 00:59:44.639294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.639314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.639342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:44.639355 | orchestrator | 2026-02-28 00:59:44.639366 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-28 00:59:44.639377 | orchestrator | Saturday 28 February 2026 00:57:04 +0000 (0:00:03.324) 0:00:36.375 ***** 2026-02-28 00:59:44.639387 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.639398 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:44.639409 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:44.639420 | orchestrator | 2026-02-28 00:59:44.639431 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-28 00:59:44.639442 | orchestrator | Saturday 28 February 2026 00:57:05 +0000 (0:00:00.790) 0:00:37.166 ***** 2026-02-28 00:59:44.639453 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.639464 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.639474 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.639486 | orchestrator | 2026-02-28 00:59:44.639497 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-28 00:59:44.639508 | orchestrator | Saturday 28 February 2026 00:57:06 +0000 (0:00:00.467) 0:00:37.634 ***** 2026-02-28 00:59:44.639519 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.639530 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.639541 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.639551 | orchestrator | 2026-02-28 00:59:44.639562 | orchestrator | TASK 
[mariadb : Check MariaDB service port liveness] *************************** 2026-02-28 00:59:44.639573 | orchestrator | Saturday 28 February 2026 00:57:06 +0000 (0:00:00.296) 0:00:37.931 ***** 2026-02-28 00:59:44.639585 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-28 00:59:44.639603 | orchestrator | ...ignoring 2026-02-28 00:59:44.639673 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-28 00:59:44.639685 | orchestrator | ...ignoring 2026-02-28 00:59:44.639697 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-28 00:59:44.639708 | orchestrator | ...ignoring 2026-02-28 00:59:44.639719 | orchestrator | 2026-02-28 00:59:44.639730 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-28 00:59:44.639741 | orchestrator | Saturday 28 February 2026 00:57:17 +0000 (0:00:10.826) 0:00:48.758 ***** 2026-02-28 00:59:44.639752 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.639762 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.639774 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.639784 | orchestrator | 2026-02-28 00:59:44.639795 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-28 00:59:44.639806 | orchestrator | Saturday 28 February 2026 00:57:17 +0000 (0:00:00.423) 0:00:49.181 ***** 2026-02-28 00:59:44.639817 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.639828 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.639839 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.639849 | orchestrator | 2026-02-28 00:59:44.639860 | 
orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-28 00:59:44.639871 | orchestrator | Saturday 28 February 2026 00:57:18 +0000 (0:00:00.590) 0:00:49.772 ***** 2026-02-28 00:59:44.639882 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.639893 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.639904 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.639915 | orchestrator | 2026-02-28 00:59:44.639926 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-28 00:59:44.639937 | orchestrator | Saturday 28 February 2026 00:57:18 +0000 (0:00:00.439) 0:00:50.211 ***** 2026-02-28 00:59:44.639947 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.639958 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.639969 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.639985 | orchestrator | 2026-02-28 00:59:44.640003 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-28 00:59:44.640031 | orchestrator | Saturday 28 February 2026 00:57:19 +0000 (0:00:00.408) 0:00:50.620 ***** 2026-02-28 00:59:44.640050 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.640068 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.640084 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.640101 | orchestrator | 2026-02-28 00:59:44.640118 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-28 00:59:44.640135 | orchestrator | Saturday 28 February 2026 00:57:19 +0000 (0:00:00.408) 0:00:51.029 ***** 2026-02-28 00:59:44.640152 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.640167 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.640184 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.640201 | orchestrator | 2026-02-28 00:59:44.640219 | orchestrator | 
TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:44.640239 | orchestrator | Saturday 28 February 2026 00:57:20 +0000 (0:00:00.561) 0:00:51.591 ***** 2026-02-28 00:59:44.640258 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.640279 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.640298 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-28 00:59:44.640319 | orchestrator | 2026-02-28 00:59:44.640338 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-28 00:59:44.640358 | orchestrator | Saturday 28 February 2026 00:57:20 +0000 (0:00:00.346) 0:00:51.937 ***** 2026-02-28 00:59:44.640374 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.640385 | orchestrator | 2026-02-28 00:59:44.640396 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-28 00:59:44.640417 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:10.196) 0:01:02.133 ***** 2026-02-28 00:59:44.640428 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.640439 | orchestrator | 2026-02-28 00:59:44.640450 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:44.640461 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:00.144) 0:01:02.278 ***** 2026-02-28 00:59:44.640472 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.640482 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.640493 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.640504 | orchestrator | 2026-02-28 00:59:44.640515 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-28 00:59:44.640526 | orchestrator | Saturday 28 February 2026 00:57:31 +0000 (0:00:01.038) 0:01:03.317 ***** 2026-02-28 00:59:44.640537 | orchestrator | 
changed: [testbed-node-0] 2026-02-28 00:59:44.640548 | orchestrator | 2026-02-28 00:59:44.640559 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-28 00:59:44.640570 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:08.688) 0:01:12.005 ***** 2026-02-28 00:59:44.640581 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 2026-02-28 00:59:44.640592 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.640603 | orchestrator | 2026-02-28 00:59:44.640614 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-28 00:59:44.640695 | orchestrator | Saturday 28 February 2026 00:57:47 +0000 (0:00:07.456) 0:01:19.462 ***** 2026-02-28 00:59:44.640707 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.640718 | orchestrator | 2026-02-28 00:59:44.640729 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-28 00:59:44.640739 | orchestrator | Saturday 28 February 2026 00:57:50 +0000 (0:00:02.683) 0:01:22.146 ***** 2026-02-28 00:59:44.640750 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.640761 | orchestrator | 2026-02-28 00:59:44.640772 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-28 00:59:44.640783 | orchestrator | Saturday 28 February 2026 00:57:50 +0000 (0:00:00.136) 0:01:22.282 ***** 2026-02-28 00:59:44.640794 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.640805 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.640816 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.640827 | orchestrator | 2026-02-28 00:59:44.640845 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-28 00:59:44.640856 | orchestrator | Saturday 28 February 2026 00:57:51 +0000 (0:00:00.338) 
0:01:22.621 ***** 2026-02-28 00:59:44.640867 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.640878 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-28 00:59:44.640889 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:44.640899 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:44.640910 | orchestrator | 2026-02-28 00:59:44.640921 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-28 00:59:44.640932 | orchestrator | skipping: no hosts matched 2026-02-28 00:59:44.640942 | orchestrator | 2026-02-28 00:59:44.640953 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-28 00:59:44.640964 | orchestrator | 2026-02-28 00:59:44.640975 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 00:59:44.640986 | orchestrator | Saturday 28 February 2026 00:57:51 +0000 (0:00:00.645) 0:01:23.266 ***** 2026-02-28 00:59:44.640996 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:44.641007 | orchestrator | 2026-02-28 00:59:44.641018 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 00:59:44.641029 | orchestrator | Saturday 28 February 2026 00:58:11 +0000 (0:00:20.236) 0:01:43.502 ***** 2026-02-28 00:59:44.641039 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.641057 | orchestrator | 2026-02-28 00:59:44.641068 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 00:59:44.641079 | orchestrator | Saturday 28 February 2026 00:58:27 +0000 (0:00:15.610) 0:01:59.113 ***** 2026-02-28 00:59:44.641090 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.641100 | orchestrator | 2026-02-28 00:59:44.641111 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-28 
00:59:44.641122 | orchestrator | 2026-02-28 00:59:44.641133 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 00:59:44.641144 | orchestrator | Saturday 28 February 2026 00:58:30 +0000 (0:00:02.576) 0:02:01.690 ***** 2026-02-28 00:59:44.641155 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:44.641166 | orchestrator | 2026-02-28 00:59:44.641186 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 00:59:44.641198 | orchestrator | Saturday 28 February 2026 00:58:48 +0000 (0:00:18.784) 0:02:20.474 ***** 2026-02-28 00:59:44.641209 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.641220 | orchestrator | 2026-02-28 00:59:44.641231 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 00:59:44.641241 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:15.658) 0:02:36.133 ***** 2026-02-28 00:59:44.641252 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.641263 | orchestrator | 2026-02-28 00:59:44.641274 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-28 00:59:44.641285 | orchestrator | 2026-02-28 00:59:44.641296 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 00:59:44.641307 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:02.854) 0:02:38.988 ***** 2026-02-28 00:59:44.641318 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.641328 | orchestrator | 2026-02-28 00:59:44.641339 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 00:59:44.641350 | orchestrator | Saturday 28 February 2026 00:59:22 +0000 (0:00:15.147) 0:02:54.135 ***** 2026-02-28 00:59:44.641361 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.641371 | orchestrator | 2026-02-28 00:59:44.641382 | 
orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 00:59:44.641393 | orchestrator | Saturday 28 February 2026 00:59:27 +0000 (0:00:04.651) 0:02:58.787 ***** 2026-02-28 00:59:44.641404 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.641414 | orchestrator | 2026-02-28 00:59:44.641425 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-28 00:59:44.641436 | orchestrator | 2026-02-28 00:59:44.641446 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-28 00:59:44.641458 | orchestrator | Saturday 28 February 2026 00:59:30 +0000 (0:00:02.940) 0:03:01.728 ***** 2026-02-28 00:59:44.641468 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:44.641479 | orchestrator | 2026-02-28 00:59:44.641490 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-28 00:59:44.641501 | orchestrator | Saturday 28 February 2026 00:59:30 +0000 (0:00:00.617) 0:03:02.346 ***** 2026-02-28 00:59:44.641512 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.641522 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.641533 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.641544 | orchestrator | 2026-02-28 00:59:44.641555 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-28 00:59:44.641566 | orchestrator | Saturday 28 February 2026 00:59:33 +0000 (0:00:02.461) 0:03:04.807 ***** 2026-02-28 00:59:44.641577 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.641588 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.641598 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.641609 | orchestrator | 2026-02-28 00:59:44.641640 | orchestrator | TASK [mariadb : Creating database backup user and setting 
permissions] ********* 2026-02-28 00:59:44.641652 | orchestrator | Saturday 28 February 2026 00:59:35 +0000 (0:00:02.349) 0:03:07.157 ***** 2026-02-28 00:59:44.641670 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.641681 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.641692 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.641703 | orchestrator | 2026-02-28 00:59:44.641714 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-28 00:59:44.641725 | orchestrator | Saturday 28 February 2026 00:59:37 +0000 (0:00:02.305) 0:03:09.462 ***** 2026-02-28 00:59:44.641736 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.641746 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.641757 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:44.641769 | orchestrator | 2026-02-28 00:59:44.641779 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-28 00:59:44.641790 | orchestrator | Saturday 28 February 2026 00:59:40 +0000 (0:00:02.471) 0:03:11.934 ***** 2026-02-28 00:59:44.641807 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:44.641818 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:44.641829 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:44.641840 | orchestrator | 2026-02-28 00:59:44.641851 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-28 00:59:44.641862 | orchestrator | Saturday 28 February 2026 00:59:43 +0000 (0:00:03.313) 0:03:15.247 ***** 2026-02-28 00:59:44.641872 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:44.641883 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:44.641894 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:44.641905 | orchestrator | 2026-02-28 00:59:44.641915 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-28 00:59:44.641927 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-28 00:59:44.641938 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-28 00:59:44.641950 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-28 00:59:44.641961 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-28 00:59:44.641972 | orchestrator | 2026-02-28 00:59:44.641983 | orchestrator | 2026-02-28 00:59:44.641994 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:59:44.642005 | orchestrator | Saturday 28 February 2026 00:59:43 +0000 (0:00:00.239) 0:03:15.486 ***** 2026-02-28 00:59:44.642049 | orchestrator | =============================================================================== 2026-02-28 00:59:44.642063 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.02s 2026-02-28 00:59:44.642081 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.27s 2026-02-28 00:59:44.642092 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.15s 2026-02-28 00:59:44.642103 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s 2026-02-28 00:59:44.642114 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.20s 2026-02-28 00:59:44.642125 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.69s 2026-02-28 00:59:44.642136 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.46s 2026-02-28 00:59:44.642147 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.43s 2026-02-28 
00:59:44.642158 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.80s 2026-02-28 00:59:44.642169 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.65s 2026-02-28 00:59:44.642179 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.35s 2026-02-28 00:59:44.642197 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.68s 2026-02-28 00:59:44.642208 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.63s 2026-02-28 00:59:44.642219 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.46s 2026-02-28 00:59:44.642230 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.32s 2026-02-28 00:59:44.642241 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.31s 2026-02-28 00:59:44.642251 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.04s 2026-02-28 00:59:44.642262 | orchestrator | Check MariaDB service --------------------------------------------------- 3.01s 2026-02-28 00:59:44.642273 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.94s 2026-02-28 00:59:44.642284 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.68s 2026-02-28 00:59:47.708960 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 00:59:47.711764 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:47.715196 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 00:59:47.715444 | orchestrator | 2026-02-28 00:59:47 | INFO  | Wait 1 second(s) until the next 
check 2026-02-28 00:59:50.759772 | orchestrator | 2026-02-28 00:59:50 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 00:59:50.759867 | orchestrator | 2026-02-28 00:59:50 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:50.760098 | orchestrator | 2026-02-28 00:59:50 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 00:59:50.760116 | orchestrator | 2026-02-28 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:53.805283 | orchestrator | 2026-02-28 00:59:53 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 00:59:53.805424 | orchestrator | 2026-02-28 00:59:53 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:53.806772 | orchestrator | 2026-02-28 00:59:53 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 00:59:53.806820 | orchestrator | 2026-02-28 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:56.844718 | orchestrator | 2026-02-28 00:59:56 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 00:59:56.848979 | orchestrator | 2026-02-28 00:59:56 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:56.851395 | orchestrator | 2026-02-28 00:59:56 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 00:59:56.851451 | orchestrator | 2026-02-28 00:59:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:59.899070 | orchestrator | 2026-02-28 00:59:59 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 00:59:59.899193 | orchestrator | 2026-02-28 00:59:59 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED 2026-02-28 00:59:59.899914 | orchestrator | 2026-02-28 00:59:59 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 
00:59:59.899967 | orchestrator | 2026-02-28 00:59:59 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:02.935251 | orchestrator | 2026-02-28 01:00:02 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:02.936409 | orchestrator | 2026-02-28 01:00:02 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:02.938337 | orchestrator | 2026-02-28 01:00:02 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:02.938382 | orchestrator | 2026-02-28 01:00:02 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:05.973311 | orchestrator | 2026-02-28 01:00:05 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:05.975513 | orchestrator | 2026-02-28 01:00:05 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:05.977033 | orchestrator | 2026-02-28 01:00:05 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:05.977179 | orchestrator | 2026-02-28 01:00:05 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:09.021753 | orchestrator | 2026-02-28 01:00:09 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:09.030215 | orchestrator | 2026-02-28 01:00:09 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:09.030298 | orchestrator | 2026-02-28 01:00:09 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:09.030309 | orchestrator | 2026-02-28 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:12.062002 | orchestrator | 2026-02-28 01:00:12 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:12.062314 | orchestrator | 2026-02-28 01:00:12 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:12.064481 | orchestrator | 2026-02-28 01:00:12 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:12.064520 | orchestrator | 2026-02-28 01:00:12 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:15.100828 | orchestrator | 2026-02-28 01:00:15 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:15.101465 | orchestrator | 2026-02-28 01:00:15 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:15.102970 | orchestrator | 2026-02-28 01:00:15 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:15.103025 | orchestrator | 2026-02-28 01:00:15 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:18.142677 | orchestrator | 2026-02-28 01:00:18 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:18.143703 | orchestrator | 2026-02-28 01:00:18 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:18.146240 | orchestrator | 2026-02-28 01:00:18 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:18.146287 | orchestrator | 2026-02-28 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:21.189253 | orchestrator | 2026-02-28 01:00:21 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:21.190391 | orchestrator | 2026-02-28 01:00:21 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:21.192432 | orchestrator | 2026-02-28 01:00:21 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:21.192507 | orchestrator | 2026-02-28 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:24.235757 | orchestrator | 2026-02-28 01:00:24 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:24.239035 | orchestrator | 2026-02-28 01:00:24 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:24.242750 | orchestrator | 2026-02-28 01:00:24 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:24.242820 | orchestrator | 2026-02-28 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:27.287558 | orchestrator | 2026-02-28 01:00:27 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:27.289475 | orchestrator | 2026-02-28 01:00:27 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:27.292034 | orchestrator | 2026-02-28 01:00:27 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:27.292092 | orchestrator | 2026-02-28 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:30.344905 | orchestrator | 2026-02-28 01:00:30 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:30.346882 | orchestrator | 2026-02-28 01:00:30 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:30.351035 | orchestrator | 2026-02-28 01:00:30 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:30.351142 | orchestrator | 2026-02-28 01:00:30 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:33.405117 | orchestrator | 2026-02-28 01:00:33 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:33.407437 | orchestrator | 2026-02-28 01:00:33 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:33.409987 | orchestrator | 2026-02-28 01:00:33 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:33.410136 | orchestrator | 2026-02-28 01:00:33 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:36.454222 | orchestrator | 2026-02-28 01:00:36 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:36.455710 | orchestrator | 2026-02-28 01:00:36 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:36.457930 | orchestrator | 2026-02-28 01:00:36 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:36.457967 | orchestrator | 2026-02-28 01:00:36 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:39.515939 | orchestrator | 2026-02-28 01:00:39 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:39.517667 | orchestrator | 2026-02-28 01:00:39 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:39.519377 | orchestrator | 2026-02-28 01:00:39 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:39.519416 | orchestrator | 2026-02-28 01:00:39 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:42.570449 | orchestrator | 2026-02-28 01:00:42 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:42.572885 | orchestrator | 2026-02-28 01:00:42 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:42.574673 | orchestrator | 2026-02-28 01:00:42 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:42.574758 | orchestrator | 2026-02-28 01:00:42 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:45.620269 | orchestrator | 2026-02-28 01:00:45 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:45.622454 | orchestrator | 2026-02-28 01:00:45 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:45.624721 | orchestrator | 2026-02-28 01:00:45 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:45.624803 | orchestrator | 2026-02-28 01:00:45 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:48.690426 | orchestrator | 2026-02-28 01:00:48 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:48.692367 | orchestrator | 2026-02-28 01:00:48 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:48.693867 | orchestrator | 2026-02-28 01:00:48 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:48.693921 | orchestrator | 2026-02-28 01:00:48 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:51.750377 | orchestrator | 2026-02-28 01:00:51 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:51.752767 | orchestrator | 2026-02-28 01:00:51 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state STARTED
2026-02-28 01:00:51.753568 | orchestrator | 2026-02-28 01:00:51 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED
2026-02-28 01:00:51.753597 | orchestrator | 2026-02-28 01:00:51 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:00:54.802574 | orchestrator | 2026-02-28 01:00:54 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED
2026-02-28 01:00:54.803167 | orchestrator | 2026-02-28 01:00:54 | INFO  | Task f46b21a3-e4c6-49f9-ba8d-7a721ba765ba is in state SUCCESS
2026-02-28 01:00:54.805308 | orchestrator |
2026-02-28 01:00:54.805530 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 01:00:54.805554 | orchestrator | 2.16.14
2026-02-28 01:00:54.805571 | orchestrator |
2026-02-28 01:00:54.805586 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-28 01:00:54.805601 | orchestrator |
2026-02-28 01:00:54.806350 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-28 01:00:54.806395 | orchestrator | Saturday 28 February 2026 00:58:40 +0000 (0:00:00.628) 0:00:00.628 *****
2026-02-28 01:00:54.806412 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml
for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 01:00:54.806430 | orchestrator |
2026-02-28 01:00:54.806446 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-28 01:00:54.806459 | orchestrator | Saturday 28 February 2026 00:58:41 +0000 (0:00:00.689) 0:00:01.317 *****
2026-02-28 01:00:54.806468 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.806478 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.806487 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.806495 | orchestrator |
2026-02-28 01:00:54.806505 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-28 01:00:54.806514 | orchestrator | Saturday 28 February 2026 00:58:41 +0000 (0:00:00.763) 0:00:02.081 *****
2026-02-28 01:00:54.806523 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.806532 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.806541 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.806550 | orchestrator |
2026-02-28 01:00:54.806559 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-28 01:00:54.806567 | orchestrator | Saturday 28 February 2026 00:58:42 +0000 (0:00:00.338) 0:00:02.420 *****
2026-02-28 01:00:54.806576 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.806585 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.806657 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.806667 | orchestrator |
2026-02-28 01:00:54.806676 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-28 01:00:54.806684 | orchestrator | Saturday 28 February 2026 00:58:43 +0000 (0:00:00.875) 0:00:03.296 *****
2026-02-28 01:00:54.806694 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.806703 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.806711 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.806720 | orchestrator |
2026-02-28 01:00:54.806729 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-28 01:00:54.806766 | orchestrator | Saturday 28 February 2026 00:58:43 +0000 (0:00:00.312) 0:00:03.609 *****
2026-02-28 01:00:54.806776 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.806784 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.806793 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.806802 | orchestrator |
2026-02-28 01:00:54.806811 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-28 01:00:54.806820 | orchestrator | Saturday 28 February 2026 00:58:43 +0000 (0:00:00.309) 0:00:03.918 *****
2026-02-28 01:00:54.806829 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.806837 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.806846 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.806854 | orchestrator |
2026-02-28 01:00:54.806864 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-28 01:00:54.806878 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.327) 0:00:04.246 *****
2026-02-28 01:00:54.806900 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.806919 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.806934 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.806948 | orchestrator |
2026-02-28 01:00:54.806963 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-28 01:00:54.806977 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.521) 0:00:04.768 *****
2026-02-28 01:00:54.806990 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.807005 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.807019 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.807034 | orchestrator |
2026-02-28 01:00:54.807049 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-28 01:00:54.807065 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.298) 0:00:05.066 *****
2026-02-28 01:00:54.807080 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 01:00:54.807093 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 01:00:54.807103 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 01:00:54.807113 | orchestrator |
2026-02-28 01:00:54.807122 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-28 01:00:54.807131 | orchestrator | Saturday 28 February 2026 00:58:45 +0000 (0:00:00.675) 0:00:05.742 *****
2026-02-28 01:00:54.807153 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.807163 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.807171 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.807180 | orchestrator |
2026-02-28 01:00:54.807189 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-28 01:00:54.807198 | orchestrator | Saturday 28 February 2026 00:58:45 +0000 (0:00:00.440) 0:00:06.183 *****
2026-02-28 01:00:54.807207 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 01:00:54.807215 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 01:00:54.807224 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 01:00:54.807233 | orchestrator |
2026-02-28 01:00:54.807242 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-28 01:00:54.807250 | orchestrator | Saturday 28 February 2026 00:58:48 +0000 (0:00:02.320) 0:00:08.503 *****
2026-02-28 01:00:54.807259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 01:00:54.807268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 01:00:54.807283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 01:00:54.807296 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.807317 | orchestrator |
2026-02-28 01:00:54.807408 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-28 01:00:54.807428 | orchestrator | Saturday 28 February 2026 00:58:48 +0000 (0:00:00.662) 0:00:09.166 *****
2026-02-28 01:00:54.807460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.807478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.807488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.807497 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.807506 | orchestrator |
2026-02-28 01:00:54.807515 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-28 01:00:54.807524 | orchestrator | Saturday 28 February 2026 00:58:49 +0000 (0:00:00.911) 0:00:10.078 *****
2026-02-28 01:00:54.807535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason':
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.807547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.807557 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.807566 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.807575 | orchestrator | 2026-02-28 01:00:54.807584 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-28 01:00:54.807593 | orchestrator | Saturday 28 February 2026 00:58:50 +0000 (0:00:00.376) 0:00:10.454 ***** 2026-02-28 01:00:54.807611 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '62f635255c14', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-28 00:58:46.732213', 'end': '2026-02-28 00:58:46.776568', 'delta': '0:00:00.044355', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['62f635255c14'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-28 01:00:54.807683 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4e96d824daf8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-28 00:58:47.539728', 'end': '2026-02-28 00:58:47.577347', 'delta': '0:00:00.037619', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4e96d824daf8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-28 01:00:54.807742 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e3c951244735', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-28 00:58:48.137760', 'end': '2026-02-28 00:58:48.176417', 'delta': '0:00:00.038657', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3c951244735'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-28 01:00:54.807753 
| orchestrator |
2026-02-28 01:00:54.807763 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-28 01:00:54.807772 | orchestrator | Saturday 28 February 2026 00:58:50 +0000 (0:00:00.225) 0:00:10.680 *****
2026-02-28 01:00:54.807781 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.807795 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:00:54.807816 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:00:54.807832 | orchestrator |
2026-02-28 01:00:54.807847 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-28 01:00:54.807861 | orchestrator | Saturday 28 February 2026 00:58:51 +0000 (0:00:00.532) 0:00:11.213 *****
2026-02-28 01:00:54.807874 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-28 01:00:54.807888 | orchestrator |
2026-02-28 01:00:54.807902 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-28 01:00:54.807916 | orchestrator | Saturday 28 February 2026 00:58:52 +0000 (0:00:01.660) 0:00:12.873 *****
2026-02-28 01:00:54.807930 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.807944 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.807958 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.807972 | orchestrator |
2026-02-28 01:00:54.807988 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-28 01:00:54.808004 | orchestrator | Saturday 28 February 2026 00:58:53 +0000 (0:00:00.336) 0:00:13.210 *****
2026-02-28 01:00:54.808018 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808030 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808039 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808047 | orchestrator |
2026-02-28 01:00:54.808056 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 01:00:54.808065 | orchestrator | Saturday 28 February 2026 00:58:53 +0000 (0:00:00.415) 0:00:13.626 *****
2026-02-28 01:00:54.808073 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808082 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808091 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808099 | orchestrator |
2026-02-28 01:00:54.808108 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-28 01:00:54.808117 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:00.139) 0:00:14.225 *****
2026-02-28 01:00:54.808126 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:00:54.808134 | orchestrator |
2026-02-28 01:00:54.808143 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-28 01:00:54.808152 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:00.249) 0:00:14.364 *****
2026-02-28 01:00:54.808161 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808169 | orchestrator |
2026-02-28 01:00:54.808178 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 01:00:54.808187 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:00.249) 0:00:14.614 *****
2026-02-28 01:00:54.808205 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808214 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808223 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808232 | orchestrator |
2026-02-28 01:00:54.808241 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-28 01:00:54.808250 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:00.349) 0:00:14.963 *****
2026-02-28 01:00:54.808259 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808267 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808276 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808285 | orchestrator |
2026-02-28 01:00:54.808294 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-28 01:00:54.808303 | orchestrator | Saturday 28 February 2026 00:58:55 +0000 (0:00:00.450) 0:00:15.414 *****
2026-02-28 01:00:54.808318 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808327 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808336 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808345 | orchestrator |
2026-02-28 01:00:54.808354 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-28 01:00:54.808363 | orchestrator | Saturday 28 February 2026 00:58:55 +0000 (0:00:00.532) 0:00:15.947 *****
2026-02-28 01:00:54.808371 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808380 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808389 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808398 | orchestrator |
2026-02-28 01:00:54.808407 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-28 01:00:54.808416 | orchestrator | Saturday 28 February 2026 00:58:56 +0000 (0:00:00.355) 0:00:16.302 *****
2026-02-28 01:00:54.808424 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808433 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.808442 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.808451 | orchestrator |
2026-02-28 01:00:54.808459 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-28 01:00:54.808468 | orchestrator | Saturday 28 February 2026 00:58:56 +0000 (0:00:00.343) 0:00:16.645 *****
2026-02-28 01:00:54.808477 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.808486 | orchestrator | skipping:
[testbed-node-4] 2026-02-28 01:00:54.808495 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.808540 | orchestrator | 2026-02-28 01:00:54.808550 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-28 01:00:54.808559 | orchestrator | Saturday 28 February 2026 00:58:56 +0000 (0:00:00.326) 0:00:16.972 ***** 2026-02-28 01:00:54.808568 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.808577 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.808586 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.808594 | orchestrator | 2026-02-28 01:00:54.808603 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-28 01:00:54.808612 | orchestrator | Saturday 28 February 2026 00:58:57 +0000 (0:00:00.584) 0:00:17.556 ***** 2026-02-28 01:00:54.808622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741', 'dm-uuid-LVM-WrHd1WBJwiIQu3wRvwi3oxAdU1uiYw1ssr0IlesLmubdqf3kJezjrYiXv7hinTbv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:00:54.808697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222', 'dm-uuid-LVM-DRo8KROozWdchoWkEV0I4rKCTeGe3CFfLwy1dNIrGyGq95SlnpSl29pQ5dp0XaOO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:00:54.808716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:00:54.808725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:00:54.808735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:00:54.808750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-28 01:00:54.808760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426', 'dm-uuid-LVM-ChISMhrkERnHZXWTu7s4Cf5VESYs0hDb5tiIHlQZ9NK3ixFV4FP9QLT1mPFzXBoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9', 'dm-uuid-LVM-MfhbHtjX1HzbaRtp6rlyWUuLSmVUMDv8D7nAKzldfMH2GcPlpwjDnIA26Y3Y2LmK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.808908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WtB1d6-sWNv-YURM-qg2z-wil5-81PB-JSzAr1', 'scsi-0QEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81', 'scsi-SQEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.808961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HybjrJ-CMl1-aoR8-mAan-oGuh-aofN-X0035i', 'scsi-0QEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf', 'scsi-SQEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.808975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.808999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de', 'scsi-SQEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-19GMjF-P3yp-G5GE-42b5-lyDa-MHK0-ctbrGm', 'scsi-0QEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723', 'scsi-SQEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809186 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:00:54.809195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FmiwOs-YAtO-YgEO-v5qO-7EK3-xq1V-dgAbZr', 'scsi-0QEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4', 'scsi-SQEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a', 'scsi-SQEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809224 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:00:54.809237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7', 'dm-uuid-LVM-kce2OSWfgnJq6VvT8pSnf5sYedgDQOSKm1UikoTeCnBPfXdH7wmGnVieltB6N3Ts'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0', 'dm-uuid-LVM-7x6LJedXGNfAgbfF9zeovIMmS7m8AIY1vwDV3zTQmeQ3rXVdMCyFMDJlZcKtQVfD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:00:54.809352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iM0Bp3-uSx6-9x09-KOmn-NAd7-OJqA-7Ip2Ie', 'scsi-0QEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0', 'scsi-SQEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HufsDV-CZn4-olxe-xxSc-cpo2-QLxi-4vdiWp', 'scsi-0QEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b', 'scsi-SQEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57', 'scsi-SQEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:00:54.809447 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:00:54.809458 | orchestrator |
2026-02-28 01:00:54.809471 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-28 01:00:54.809484 | orchestrator | Saturday 28 February 2026 00:58:58 +0000 (0:00:00.633) 0:00:18.190 *****
2026-02-28 01:00:54.809498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741', 'dm-uuid-LVM-WrHd1WBJwiIQu3wRvwi3oxAdU1uiYw1ssr0IlesLmubdqf3kJezjrYiXv7hinTbv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222', 'dm-uuid-LVM-DRo8KROozWdchoWkEV0I4rKCTeGe3CFfLwy1dNIrGyGq95SlnpSl29pQ5dp0XaOO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426', 'dm-uuid-LVM-ChISMhrkERnHZXWTu7s4Cf5VESYs0hDb5tiIHlQZ9NK3ixFV4FP9QLT1mPFzXBoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809694 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9', 'dm-uuid-LVM-MfhbHtjX1HzbaRtp6rlyWUuLSmVUMDv8D7nAKzldfMH2GcPlpwjDnIA26Y3Y2LmK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809764 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809779 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809793 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16', 'scsi-SQEMU_QEMU_HARDDISK_b20a930b-70d1-42c7-a265-d4a23b5b0ea5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 01:00:54.809880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809894 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--867868d0--bc68--54b2--8c81--3bd5cfa2d741-osd--block--867868d0--bc68--54b2--8c81--3bd5cfa2d741'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WtB1d6-sWNv-YURM-qg2z-wil5-81PB-JSzAr1', 'scsi-0QEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81', 'scsi-SQEMU_QEMU_HARDDISK_5a576e70-544d-44fd-a16d-0d3a23dfbf81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee950762--4564--5222--9e83--52313bf46222-osd--block--ee950762--4564--5222--9e83--52313bf46222'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HybjrJ-CMl1-aoR8-mAan-oGuh-aofN-X0035i', 'scsi-0QEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf', 'scsi-SQEMU_QEMU_HARDDISK_72888dc0-89fa-4d82-a9e9-f7d921f86abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de', 'scsi-SQEMU_QEMU_HARDDISK_7a83ed65-2ee8-47d4-9c51-9fbd7e5801de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.809999 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 01:00:54.810093 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16', 'scsi-SQEMU_QEMU_HARDDISK_c1f8a38c-6103-4a77-9722-35142b367f20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 
MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b073c23--7edc--573a--a84d--7267a4d3e426-osd--block--7b073c23--7edc--573a--a84d--7267a4d3e426'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-19GMjF-P3yp-G5GE-42b5-lyDa-MHK0-ctbrGm', 'scsi-0QEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723', 'scsi-SQEMU_QEMU_HARDDISK_0e1c50ba-f800-4f3f-b273-e42be7614723'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b30b5faa--3070--5965--91f3--7d8dbacf19e9-osd--block--b30b5faa--3070--5965--91f3--7d8dbacf19e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FmiwOs-YAtO-YgEO-v5qO-7EK3-xq1V-dgAbZr', 'scsi-0QEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4', 'scsi-SQEMU_QEMU_HARDDISK_e9cea570-02f9-4492-a688-e95ec43126f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a', 'scsi-SQEMU_QEMU_HARDDISK_dc966f00-bd76-481b-987a-91131c9d0b5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7', 'dm-uuid-LVM-kce2OSWfgnJq6VvT8pSnf5sYedgDQOSKm1UikoTeCnBPfXdH7wmGnVieltB6N3Ts'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810179 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.810187 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0', 'dm-uuid-LVM-7x6LJedXGNfAgbfF9zeovIMmS7m8AIY1vwDV3zTQmeQ3rXVdMCyFMDJlZcKtQVfD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810196 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810209 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810224 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2412f3-1166-49bc-8fd3-bb5cc54eb3cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
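Every per-device loop item above reports `'false_condition': 'osd_auto_discovery | default(False) | bool'`: the variable is unset in this testbed inventory, so the Jinja2 filter chain resolves to False and ceph-ansible skips auto-discovery for each block device. A minimal Python sketch of how that expression resolves (the helper names are illustrative stand-ins, not Ansible's actual internals):

```python
def default(value, fallback):
    """Rough stand-in for Jinja2's default filter: use the fallback when
    the variable is undefined (modeled here as None)."""
    return fallback if value is None else value

def to_bool(value):
    """Rough stand-in for Ansible's bool filter: common truthy spellings."""
    return str(value).strip().lower() in ("1", "true", "yes", "on")

# osd_auto_discovery is not defined in the inventory, so the chain
# default(False) | bool resolves to False and every item is skipped.
hostvars = {}
osd_auto_discovery = to_bool(default(hostvars.get("osd_auto_discovery"), False))
print(osd_auto_discovery)  # False
```

Setting `osd_auto_discovery: true` in the inventory would flip the condition and let ceph-ansible enumerate eligible devices instead of skipping each one.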
2026-02-28 01:00:54.810363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f012bc14--1358--5d7b--888e--596399f0a0b7-osd--block--f012bc14--1358--5d7b--888e--596399f0a0b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iM0Bp3-uSx6-9x09-KOmn-NAd7-OJqA-7Ip2Ie', 'scsi-0QEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0', 'scsi-SQEMU_QEMU_HARDDISK_2e8d751a-8ce2-4e55-9ca5-cbc1ced8bcd0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--de70aebc--f344--5246--8655--326adc55aaa0-osd--block--de70aebc--f344--5246--8655--326adc55aaa0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HufsDV-CZn4-olxe-xxSc-cpo2-QLxi-4vdiWp', 'scsi-0QEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b', 'scsi-SQEMU_QEMU_HARDDISK_35a5842a-f5e3-41cd-9ad4-9887af65562b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810406 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57', 'scsi-SQEMU_QEMU_HARDDISK_794d6bd6-cdc9-465f-9345-dcdc45cdec57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:00:54.810443 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.810456 | orchestrator | 2026-02-28 01:00:54.810469 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-28 01:00:54.810483 | orchestrator | Saturday 28 February 2026 00:58:58 +0000 (0:00:00.686) 0:00:18.877 ***** 2026-02-28 01:00:54.810495 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:00:54.810507 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:00:54.810521 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:00:54.810534 | orchestrator | 2026-02-28 01:00:54.810547 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-28 01:00:54.810561 | orchestrator | Saturday 28 February 2026 00:58:59 +0000 (0:00:00.729) 0:00:19.606 ***** 2026-02-28 01:00:54.810574 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:00:54.810582 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:00:54.810590 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:00:54.810599 | orchestrator | 2026-02-28 01:00:54.810612 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-28 01:00:54.810657 | orchestrator | Saturday 28 February 2026 00:58:59 +0000 (0:00:00.529) 0:00:20.136 ***** 2026-02-28 01:00:54.810673 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:00:54.810685 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:00:54.810698 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:00:54.810709 | orchestrator | 2026-02-28 01:00:54.810720 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-28 01:00:54.810731 | orchestrator | Saturday 28 February 2026 00:59:00 +0000 (0:00:00.763) 
0:00:20.899 ***** 2026-02-28 01:00:54.810744 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.810757 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.810770 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.810784 | orchestrator | 2026-02-28 01:00:54.810797 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-28 01:00:54.810810 | orchestrator | Saturday 28 February 2026 00:59:01 +0000 (0:00:00.313) 0:00:21.213 ***** 2026-02-28 01:00:54.810825 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.810835 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.810852 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.810860 | orchestrator | 2026-02-28 01:00:54.810868 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-28 01:00:54.810876 | orchestrator | Saturday 28 February 2026 00:59:01 +0000 (0:00:00.437) 0:00:21.650 ***** 2026-02-28 01:00:54.810884 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.810892 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.810900 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.810908 | orchestrator | 2026-02-28 01:00:54.810916 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-28 01:00:54.810924 | orchestrator | Saturday 28 February 2026 00:59:02 +0000 (0:00:00.579) 0:00:22.230 ***** 2026-02-28 01:00:54.810932 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-28 01:00:54.810941 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-28 01:00:54.810949 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-28 01:00:54.810956 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-28 01:00:54.810964 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-28 01:00:54.810972 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-28 01:00:54.810980 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-28 01:00:54.810988 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-28 01:00:54.810996 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-28 01:00:54.811004 | orchestrator | 2026-02-28 01:00:54.811012 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-28 01:00:54.811019 | orchestrator | Saturday 28 February 2026 00:59:03 +0000 (0:00:01.093) 0:00:23.323 ***** 2026-02-28 01:00:54.811027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 01:00:54.811037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 01:00:54.811050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 01:00:54.811063 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811084 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-28 01:00:54.811097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-28 01:00:54.811111 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-28 01:00:54.811123 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.811137 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-28 01:00:54.811149 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 01:00:54.811162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 01:00:54.811176 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.811189 | orchestrator | 2026-02-28 01:00:54.811201 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-28 01:00:54.811215 | orchestrator | Saturday 28 February 2026 00:59:03 +0000 (0:00:00.443) 0:00:23.766 ***** 2026-02-28 
01:00:54.811229 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:00:54.811242 | orchestrator | 2026-02-28 01:00:54.811255 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-28 01:00:54.811271 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.825) 0:00:24.592 ***** 2026-02-28 01:00:54.811294 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811307 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.811315 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.811323 | orchestrator | 2026-02-28 01:00:54.811331 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-28 01:00:54.811339 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.359) 0:00:24.952 ***** 2026-02-28 01:00:54.811347 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811363 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.811371 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.811379 | orchestrator | 2026-02-28 01:00:54.811387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-28 01:00:54.811412 | orchestrator | Saturday 28 February 2026 00:59:05 +0000 (0:00:00.321) 0:00:25.274 ***** 2026-02-28 01:00:54.811421 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811429 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.811437 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:00:54.811445 | orchestrator | 2026-02-28 01:00:54.811453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-28 01:00:54.811461 | orchestrator | Saturday 28 February 2026 00:59:05 +0000 (0:00:00.387) 0:00:25.661 ***** 2026-02-28 
01:00:54.811469 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:00:54.811477 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:00:54.811485 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:00:54.811492 | orchestrator | 2026-02-28 01:00:54.811500 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-28 01:00:54.811508 | orchestrator | Saturday 28 February 2026 00:59:06 +0000 (0:00:00.998) 0:00:26.660 ***** 2026-02-28 01:00:54.811516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:00:54.811524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:00:54.811532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:00:54.811539 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811553 | orchestrator | 2026-02-28 01:00:54.811567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-28 01:00:54.811580 | orchestrator | Saturday 28 February 2026 00:59:06 +0000 (0:00:00.405) 0:00:27.065 ***** 2026-02-28 01:00:54.811593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:00:54.811605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:00:54.811618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:00:54.811653 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811666 | orchestrator | 2026-02-28 01:00:54.811679 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-28 01:00:54.811691 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:00.447) 0:00:27.513 ***** 2026-02-28 01:00:54.811703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:00:54.811715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:00:54.811727 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:00:54.811740 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.811752 | orchestrator | 2026-02-28 01:00:54.811764 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-28 01:00:54.811777 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:00.511) 0:00:28.024 ***** 2026-02-28 01:00:54.811792 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:00:54.811805 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:00:54.811818 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:00:54.811831 | orchestrator | 2026-02-28 01:00:54.811845 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-28 01:00:54.811858 | orchestrator | Saturday 28 February 2026 00:59:08 +0000 (0:00:00.358) 0:00:28.383 ***** 2026-02-28 01:00:54.811872 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-28 01:00:54.811885 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 01:00:54.811898 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-28 01:00:54.811913 | orchestrator | 2026-02-28 01:00:54.811926 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-28 01:00:54.811939 | orchestrator | Saturday 28 February 2026 00:59:08 +0000 (0:00:00.556) 0:00:28.939 ***** 2026-02-28 01:00:54.811952 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:00:54.811987 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:00:54.812016 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:00:54.812030 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:00:54.812057 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-28 01:00:54.812070 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:00:54.812083 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 01:00:54.812096 | orchestrator | 2026-02-28 01:00:54.812108 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-28 01:00:54.812120 | orchestrator | Saturday 28 February 2026 00:59:09 +0000 (0:00:01.192) 0:00:30.132 ***** 2026-02-28 01:00:54.812133 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:00:54.812145 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:00:54.812157 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:00:54.812170 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:00:54.812183 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 01:00:54.812197 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:00:54.812222 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 01:00:54.812237 | orchestrator | 2026-02-28 01:00:54.812250 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-28 01:00:54.812261 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:02.251) 0:00:32.384 ***** 2026-02-28 01:00:54.812274 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:00:54.812287 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:00:54.812300 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-28 01:00:54.812314 | orchestrator | 2026-02-28 01:00:54.812328 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-28 01:00:54.812343 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:00.545) 0:00:32.929 ***** 2026-02-28 01:00:54.812359 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:00:54.812375 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:00:54.812389 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:00:54.812402 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:00:54.812416 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:00:54.812428 | orchestrator | 2026-02-28 01:00:54.812454 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-28 01:00:54.812467 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:44.264) 0:01:17.194 ***** 2026-02-28 01:00:54.812481 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812494 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812547 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812561 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-28 01:00:54.812574 | orchestrator | 2026-02-28 01:00:54.812587 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-28 01:00:54.812600 | orchestrator | Saturday 28 February 2026 01:00:22 +0000 (0:00:25.143) 0:01:42.338 ***** 2026-02-28 01:00:54.812614 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812818 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812861 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812869 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812877 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812885 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812894 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 01:00:54.812902 | orchestrator | 2026-02-28 01:00:54.812910 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-28 01:00:54.812926 | orchestrator | Saturday 28 February 2026 01:00:34 +0000 (0:00:12.756) 0:01:55.094 ***** 2026-02-28 01:00:54.812934 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812942 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:00:54.812950 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:00:54.812958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.812966 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:00:54.812985 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:00:54.812993 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.813000 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:00:54.813006 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:00:54.813013 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.813020 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:00:54.813027 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:00:54.813033 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.813040 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-28 01:00:54.813047 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:00:54.813053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:00:54.813068 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:00:54.813075 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:00:54.813082 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-28 01:00:54.813089 | orchestrator | 2026-02-28 01:00:54.813096 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:00:54.813103 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-28 01:00:54.813111 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-28 01:00:54.813119 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-28 01:00:54.813126 | orchestrator | 2026-02-28 01:00:54.813133 | orchestrator | 2026-02-28 01:00:54.813140 | orchestrator | 2026-02-28 01:00:54.813146 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:00:54.813153 | orchestrator | Saturday 28 February 2026 01:00:53 +0000 (0:00:18.880) 0:02:13.975 ***** 2026-02-28 01:00:54.813160 | orchestrator | =============================================================================== 2026-02-28 01:00:54.813167 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.26s 2026-02-28 01:00:54.813173 | orchestrator | generate keys ---------------------------------------------------------- 25.14s 2026-02-28 01:00:54.813180 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.88s 
2026-02-28 01:00:54.813187 | orchestrator | get keys from monitors ------------------------------------------------- 12.76s 2026-02-28 01:00:54.813193 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.32s 2026-02-28 01:00:54.813200 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.25s 2026-02-28 01:00:54.813207 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.66s 2026-02-28 01:00:54.813214 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.19s 2026-02-28 01:00:54.813220 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.09s 2026-02-28 01:00:54.813227 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 1.00s 2026-02-28 01:00:54.813234 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.91s 2026-02-28 01:00:54.813240 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2026-02-28 01:00:54.813247 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.83s 2026-02-28 01:00:54.813254 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.76s 2026-02-28 01:00:54.813261 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.76s 2026-02-28 01:00:54.813273 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2026-02-28 01:00:54.813280 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.69s 2026-02-28 01:00:54.813287 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.69s 2026-02-28 01:00:54.813294 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2026-02-28 
01:00:54.813300 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s 2026-02-28 01:00:54.813307 | orchestrator | 2026-02-28 01:00:54 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:00:54.813314 | orchestrator | 2026-02-28 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:57.868206 | orchestrator | 2026-02-28 01:00:57 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:00:57.870919 | orchestrator | 2026-02-28 01:00:57 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:00:57.872420 | orchestrator | 2026-02-28 01:00:57 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:00:57.872495 | orchestrator | 2026-02-28 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:00.927313 | orchestrator | 2026-02-28 01:01:00 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:00.930369 | orchestrator | 2026-02-28 01:01:00 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:00.931995 | orchestrator | 2026-02-28 01:01:00 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:00.932045 | orchestrator | 2026-02-28 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:03.982013 | orchestrator | 2026-02-28 01:01:03 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:03.985888 | orchestrator | 2026-02-28 01:01:03 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:03.987966 | orchestrator | 2026-02-28 01:01:03 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:03.988116 | orchestrator | 2026-02-28 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:07.043865 | orchestrator | 2026-02-28 01:01:07 | INFO  | Task 
fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:07.044622 | orchestrator | 2026-02-28 01:01:07 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:07.046261 | orchestrator | 2026-02-28 01:01:07 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:07.046515 | orchestrator | 2026-02-28 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:10.098164 | orchestrator | 2026-02-28 01:01:10 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:10.100557 | orchestrator | 2026-02-28 01:01:10 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:10.102918 | orchestrator | 2026-02-28 01:01:10 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:10.103758 | orchestrator | 2026-02-28 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:13.149129 | orchestrator | 2026-02-28 01:01:13 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:13.154239 | orchestrator | 2026-02-28 01:01:13 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:13.157785 | orchestrator | 2026-02-28 01:01:13 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:13.158354 | orchestrator | 2026-02-28 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:16.209938 | orchestrator | 2026-02-28 01:01:16 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:16.213005 | orchestrator | 2026-02-28 01:01:16 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:16.215016 | orchestrator | 2026-02-28 01:01:16 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:16.215523 | orchestrator | 2026-02-28 01:01:16 | INFO  | Wait 1 second(s) until the next 
check 2026-02-28 01:01:19.272398 | orchestrator | 2026-02-28 01:01:19 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:19.275257 | orchestrator | 2026-02-28 01:01:19 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:19.277065 | orchestrator | 2026-02-28 01:01:19 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:19.277115 | orchestrator | 2026-02-28 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:22.335813 | orchestrator | 2026-02-28 01:01:22 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:22.337292 | orchestrator | 2026-02-28 01:01:22 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:22.340145 | orchestrator | 2026-02-28 01:01:22 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:22.340763 | orchestrator | 2026-02-28 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:25.385766 | orchestrator | 2026-02-28 01:01:25 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:25.388028 | orchestrator | 2026-02-28 01:01:25 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:25.393382 | orchestrator | 2026-02-28 01:01:25 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:25.393436 | orchestrator | 2026-02-28 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:28.434183 | orchestrator | 2026-02-28 01:01:28 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:28.434283 | orchestrator | 2026-02-28 01:01:28 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:28.434942 | orchestrator | 2026-02-28 01:01:28 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 
01:01:28.436499 | orchestrator | 2026-02-28 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:31.481131 | orchestrator | 2026-02-28 01:01:31 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:31.481721 | orchestrator | 2026-02-28 01:01:31 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:31.482519 | orchestrator | 2026-02-28 01:01:31 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:31.482544 | orchestrator | 2026-02-28 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:34.541845 | orchestrator | 2026-02-28 01:01:34 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:34.543561 | orchestrator | 2026-02-28 01:01:34 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state STARTED 2026-02-28 01:01:34.545540 | orchestrator | 2026-02-28 01:01:34 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state STARTED 2026-02-28 01:01:34.545574 | orchestrator | 2026-02-28 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:37.597850 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:37.599547 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task 95313494-b67e-41d7-b912-483b314c7331 is in state SUCCESS 2026-02-28 01:01:37.602320 | orchestrator | 2026-02-28 01:01:37.602425 | orchestrator | 2026-02-28 01:01:37.602452 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:01:37.602473 | orchestrator | 2026-02-28 01:01:37.602493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:01:37.602505 | orchestrator | Saturday 28 February 2026 00:59:48 +0000 (0:00:00.278) 0:00:00.278 ***** 2026-02-28 01:01:37.602517 | orchestrator | ok: [testbed-node-0] 2026-02-28 
01:01:37.602537 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.602556 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.602575 | orchestrator | 2026-02-28 01:01:37.602595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:01:37.602747 | orchestrator | Saturday 28 February 2026 00:59:49 +0000 (0:00:00.388) 0:00:00.667 ***** 2026-02-28 01:01:37.602766 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-28 01:01:37.602777 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-28 01:01:37.602788 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-28 01:01:37.602799 | orchestrator | 2026-02-28 01:01:37.602810 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-28 01:01:37.602821 | orchestrator | 2026-02-28 01:01:37.602832 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:37.602843 | orchestrator | Saturday 28 February 2026 00:59:49 +0000 (0:00:00.451) 0:00:01.118 ***** 2026-02-28 01:01:37.602854 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:37.602866 | orchestrator | 2026-02-28 01:01:37.602878 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-28 01:01:37.602888 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:00.554) 0:00:01.672 ***** 2026-02-28 01:01:37.602923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.602968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-02-28 01:01:37.602993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.603005 | orchestrator | 2026-02-28 01:01:37.603017 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-28 01:01:37.603029 | orchestrator | Saturday 28 February 2026 00:59:51 +0000 (0:00:01.200) 0:00:02.872 ***** 2026-02-28 01:01:37.603040 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.603051 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.603069 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.603089 | orchestrator | 2026-02-28 01:01:37.603110 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:37.603133 | orchestrator | Saturday 28 February 2026 00:59:51 +0000 (0:00:00.483) 0:00:03.356 ***** 2026-02-28 01:01:37.603166 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-28 01:01:37.603186 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-28 01:01:37.603206 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-28 01:01:37.603225 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-28 01:01:37.603245 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-28 01:01:37.603264 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-28 01:01:37.603283 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-28 01:01:37.603302 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-28 
01:01:37.603321 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-28 01:01:37.603342 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-28 01:01:37.603362 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-28 01:01:37.603379 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-28 01:01:37.603390 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-28 01:01:37.603401 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-28 01:01:37.603411 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-28 01:01:37.603422 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-28 01:01:37.603440 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-28 01:01:37.603451 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-28 01:01:37.603462 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-28 01:01:37.603473 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-28 01:01:37.603483 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-28 01:01:37.603494 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-28 01:01:37.603505 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-28 01:01:37.603515 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-28 01:01:37.603527 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-28 01:01:37.603540 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-28 01:01:37.603551 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-28 01:01:37.603562 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-28 01:01:37.603572 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-28 01:01:37.603583 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-28 01:01:37.603603 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-28 01:01:37.603614 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-28 01:01:37.603625 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-28 01:01:37.603674 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-28 01:01:37.603687 | orchestrator | 2026-02-28 01:01:37.603698 | orchestrator | TASK [horizon : Update policy file 
name] *************************************** 2026-02-28 01:01:37.603709 | orchestrator | Saturday 28 February 2026 00:59:52 +0000 (0:00:00.845) 0:00:04.202 ***** 2026-02-28 01:01:37.603720 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.603731 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.603741 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.603752 | orchestrator | 2026-02-28 01:01:37.603763 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.603774 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:00.317) 0:00:04.520 ***** 2026-02-28 01:01:37.603793 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.603806 | orchestrator | 2026-02-28 01:01:37.603816 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.603828 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:00.174) 0:00:04.694 ***** 2026-02-28 01:01:37.603839 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.603850 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.603861 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.603872 | orchestrator | 2026-02-28 01:01:37.603883 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.603893 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:00.540) 0:00:05.235 ***** 2026-02-28 01:01:37.603904 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.603915 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.603926 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.603937 | orchestrator | 2026-02-28 01:01:37.603948 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.603959 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.366) 0:00:05.602 ***** 2026-02-28 
01:01:37.603970 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.603981 | orchestrator | 2026-02-28 01:01:37.603991 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.604002 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.150) 0:00:05.752 ***** 2026-02-28 01:01:37.604013 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604024 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.604035 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.604046 | orchestrator | 2026-02-28 01:01:37.604057 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.604068 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.336) 0:00:06.088 ***** 2026-02-28 01:01:37.604078 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.604089 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.604100 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.604111 | orchestrator | 2026-02-28 01:01:37.604122 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.604138 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.398) 0:00:06.487 ***** 2026-02-28 01:01:37.604149 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604169 | orchestrator | 2026-02-28 01:01:37.604180 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.604191 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.363) 0:00:06.850 ***** 2026-02-28 01:01:37.604202 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604213 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.604224 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.604235 | orchestrator | 2026-02-28 01:01:37.604246 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-02-28 01:01:37.604257 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.335) 0:00:07.186 ***** 2026-02-28 01:01:37.604267 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.604278 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.604289 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.604300 | orchestrator | 2026-02-28 01:01:37.604311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.604322 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.370) 0:00:07.557 ***** 2026-02-28 01:01:37.604333 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604344 | orchestrator | 2026-02-28 01:01:37.604355 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.604366 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.137) 0:00:07.695 ***** 2026-02-28 01:01:37.604377 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604388 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.604399 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.604409 | orchestrator | 2026-02-28 01:01:37.604421 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.604432 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.343) 0:00:08.038 ***** 2026-02-28 01:01:37.604442 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.604453 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.604464 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.604475 | orchestrator | 2026-02-28 01:01:37.604486 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.604497 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.556) 0:00:08.595 ***** 
2026-02-28 01:01:37.604508 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604518 | orchestrator | 2026-02-28 01:01:37.604529 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.604540 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.148) 0:00:08.744 ***** 2026-02-28 01:01:37.604551 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604562 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.604572 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.604583 | orchestrator | 2026-02-28 01:01:37.604594 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.604605 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.326) 0:00:09.071 ***** 2026-02-28 01:01:37.604616 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.604627 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.604663 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.604674 | orchestrator | 2026-02-28 01:01:37.604685 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.604696 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:00.340) 0:00:09.411 ***** 2026-02-28 01:01:37.604706 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604717 | orchestrator | 2026-02-28 01:01:37.604728 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.604739 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:00.139) 0:00:09.551 ***** 2026-02-28 01:01:37.604750 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604761 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.604771 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.604782 | orchestrator | 2026-02-28 01:01:37.604794 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.604819 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:00.311) 0:00:09.863 ***** 2026-02-28 01:01:37.604831 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.604842 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.604853 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.604864 | orchestrator | 2026-02-28 01:01:37.604875 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.604886 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.609) 0:00:10.472 ***** 2026-02-28 01:01:37.604897 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604908 | orchestrator | 2026-02-28 01:01:37.604919 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.604929 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.158) 0:00:10.631 ***** 2026-02-28 01:01:37.604940 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.604951 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.604962 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.604973 | orchestrator | 2026-02-28 01:01:37.604984 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.604995 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.302) 0:00:10.934 ***** 2026-02-28 01:01:37.605005 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.605016 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.605027 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.605038 | orchestrator | 2026-02-28 01:01:37.605049 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.605059 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.361) 
0:00:11.296 ***** 2026-02-28 01:01:37.605070 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605081 | orchestrator | 2026-02-28 01:01:37.605092 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.605103 | orchestrator | Saturday 28 February 2026 01:00:00 +0000 (0:00:00.205) 0:00:11.502 ***** 2026-02-28 01:01:37.605113 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605124 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.605135 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.605146 | orchestrator | 2026-02-28 01:01:37.605162 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.605173 | orchestrator | Saturday 28 February 2026 01:00:00 +0000 (0:00:00.687) 0:00:12.189 ***** 2026-02-28 01:01:37.605184 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.605195 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.605206 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.605217 | orchestrator | 2026-02-28 01:01:37.605227 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.605238 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.351) 0:00:12.540 ***** 2026-02-28 01:01:37.605249 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605260 | orchestrator | 2026-02-28 01:01:37.605271 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.605282 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.188) 0:00:12.729 ***** 2026-02-28 01:01:37.605292 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605303 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.605314 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.605325 | orchestrator | 2026-02-28 01:01:37.605335 | 
orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-28 01:01:37.605346 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.345) 0:00:13.074 ***** 2026-02-28 01:01:37.605357 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:37.605368 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:37.605379 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:37.605390 | orchestrator | 2026-02-28 01:01:37.605401 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-28 01:01:37.605419 | orchestrator | Saturday 28 February 2026 01:00:02 +0000 (0:00:00.346) 0:00:13.421 ***** 2026-02-28 01:01:37.605430 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605441 | orchestrator | 2026-02-28 01:01:37.605452 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-28 01:01:37.605463 | orchestrator | Saturday 28 February 2026 01:00:02 +0000 (0:00:00.133) 0:00:13.554 ***** 2026-02-28 01:01:37.605474 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605485 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.605496 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.605507 | orchestrator | 2026-02-28 01:01:37.605517 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-28 01:01:37.605528 | orchestrator | Saturday 28 February 2026 01:00:02 +0000 (0:00:00.587) 0:00:14.141 ***** 2026-02-28 01:01:37.605539 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:01:37.605550 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:37.605560 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:01:37.605571 | orchestrator | 2026-02-28 01:01:37.605582 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-28 01:01:37.605593 | orchestrator | Saturday 28 February 2026 
01:00:04 +0000 (0:00:01.745) 0:00:15.887 ***** 2026-02-28 01:01:37.605604 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-28 01:01:37.605615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-28 01:01:37.605626 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-28 01:01:37.605661 | orchestrator | 2026-02-28 01:01:37.605672 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-28 01:01:37.605684 | orchestrator | Saturday 28 February 2026 01:00:06 +0000 (0:00:02.179) 0:00:18.066 ***** 2026-02-28 01:01:37.605695 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-28 01:01:37.605706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-28 01:01:37.605717 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-28 01:01:37.605728 | orchestrator | 2026-02-28 01:01:37.605746 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-28 01:01:37.605758 | orchestrator | Saturday 28 February 2026 01:00:09 +0000 (0:00:03.104) 0:00:21.171 ***** 2026-02-28 01:01:37.605769 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-28 01:01:37.605780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-28 01:01:37.605791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-28 01:01:37.605802 | orchestrator | 2026-02-28 01:01:37.605812 | orchestrator | TASK [horizon : Copying over existing policy file] 
***************************** 2026-02-28 01:01:37.605824 | orchestrator | Saturday 28 February 2026 01:00:11 +0000 (0:00:02.184) 0:00:23.356 ***** 2026-02-28 01:01:37.605835 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605845 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.605856 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.605868 | orchestrator | 2026-02-28 01:01:37.605878 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-28 01:01:37.605889 | orchestrator | Saturday 28 February 2026 01:00:12 +0000 (0:00:00.338) 0:00:23.695 ***** 2026-02-28 01:01:37.605900 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.605911 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.605922 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.605933 | orchestrator | 2026-02-28 01:01:37.605944 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:37.605964 | orchestrator | Saturday 28 February 2026 01:00:12 +0000 (0:00:00.298) 0:00:23.993 ***** 2026-02-28 01:01:37.605975 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:37.605986 | orchestrator | 2026-02-28 01:01:37.605997 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-28 01:01:37.606013 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:00.852) 0:00:24.846 ***** 2026-02-28 01:01:37.606081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.606121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-02-28 01:01:37.606142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.606155 | orchestrator | 2026-02-28 01:01:37.606166 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-28 01:01:37.606177 | orchestrator | Saturday 28 February 2026 01:00:15 +0000 (0:00:01.608) 0:00:26.455 ***** 2026-02-28 01:01:37.606204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:37.606224 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.606243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:37.606256 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.606275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:37.606293 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.606304 | orchestrator | 2026-02-28 01:01:37.606315 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-28 01:01:37.606327 | orchestrator | Saturday 28 February 2026 01:00:15 +0000 (0:00:00.733) 0:00:27.188 ***** 2026-02-28 01:01:37.606346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:37.606369 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 01:01:37.606387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:37.606400 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.606419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:37.606439 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:37.606450 | orchestrator | 2026-02-28 01:01:37.606461 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-28 01:01:37.606472 | orchestrator | Saturday 28 February 2026 01:00:16 +0000 (0:00:00.853) 0:00:28.042 ***** 2026-02-28 01:01:37.606489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.606516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.606536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:37.606548 | orchestrator | 2026-02-28 01:01:37.606560 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:37.606571 | orchestrator | Saturday 28 February 2026 01:00:18 +0000 (0:00:01.692) 0:00:29.735 ***** 2026-02-28 01:01:37.606582 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:37.606593 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:37.606604 | orchestrator | 
skipping: [testbed-node-2] 2026-02-28 01:01:37.606615 | orchestrator | 2026-02-28 01:01:37.606626 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:37.606693 | orchestrator | Saturday 28 February 2026 01:00:18 +0000 (0:00:00.316) 0:00:30.051 ***** 2026-02-28 01:01:37.606705 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:37.606716 | orchestrator | 2026-02-28 01:01:37.606727 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-28 01:01:37.606738 | orchestrator | Saturday 28 February 2026 01:00:19 +0000 (0:00:00.599) 0:00:30.650 ***** 2026-02-28 01:01:37.606749 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:37.606760 | orchestrator | 2026-02-28 01:01:37.606771 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-28 01:01:37.606782 | orchestrator | Saturday 28 February 2026 01:00:21 +0000 (0:00:02.669) 0:00:33.320 ***** 2026-02-28 01:01:37.606793 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:37.606803 | orchestrator | 2026-02-28 01:01:37.606820 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-28 01:01:37.606839 | orchestrator | Saturday 28 February 2026 01:00:24 +0000 (0:00:03.030) 0:00:36.350 ***** 2026-02-28 01:01:37.606867 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:37.606890 | orchestrator | 2026-02-28 01:01:37.606908 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:01:37.606925 | orchestrator | Saturday 28 February 2026 01:00:43 +0000 (0:00:18.158) 0:00:54.509 ***** 2026-02-28 01:01:37.606942 | orchestrator | 2026-02-28 01:01:37.606957 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 
01:01:37.606973 | orchestrator | Saturday 28 February 2026 01:00:43 +0000 (0:00:00.080) 0:00:54.589 ***** 2026-02-28 01:01:37.606990 | orchestrator | 2026-02-28 01:01:37.607009 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:01:37.607029 | orchestrator | Saturday 28 February 2026 01:00:43 +0000 (0:00:00.066) 0:00:54.656 ***** 2026-02-28 01:01:37.607046 | orchestrator | 2026-02-28 01:01:37.607064 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-28 01:01:37.607083 | orchestrator | Saturday 28 February 2026 01:00:43 +0000 (0:00:00.069) 0:00:54.726 ***** 2026-02-28 01:01:37.607102 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:37.607131 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:01:37.607147 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:01:37.607158 | orchestrator | 2026-02-28 01:01:37.607169 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:01:37.607180 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 01:01:37.607191 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-28 01:01:37.607202 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-28 01:01:37.607213 | orchestrator | 2026-02-28 01:01:37.607224 | orchestrator | 2026-02-28 01:01:37.607235 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:01:37.607246 | orchestrator | Saturday 28 February 2026 01:01:36 +0000 (0:00:52.779) 0:01:47.505 ***** 2026-02-28 01:01:37.607257 | orchestrator | =============================================================================== 2026-02-28 01:01:37.607268 | orchestrator | horizon : Restart horizon container 
------------------------------------ 52.78s 2026-02-28 01:01:37.607278 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.16s 2026-02-28 01:01:37.607289 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.10s 2026-02-28 01:01:37.607300 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.03s 2026-02-28 01:01:37.607311 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.67s 2026-02-28 01:01:37.607321 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.18s 2026-02-28 01:01:37.607342 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.18s 2026-02-28 01:01:37.607354 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.75s 2026-02-28 01:01:37.607364 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.69s 2026-02-28 01:01:37.607375 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.61s 2026-02-28 01:01:37.607386 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2026-02-28 01:01:37.607397 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s 2026-02-28 01:01:37.607407 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2026-02-28 01:01:37.607418 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2026-02-28 01:01:37.607429 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.73s 2026-02-28 01:01:37.607440 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.69s 2026-02-28 01:01:37.607451 | orchestrator | horizon : Update policy file name 
--------------------------------------- 0.61s 2026-02-28 01:01:37.607467 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-02-28 01:01:37.607484 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2026-02-28 01:01:37.607510 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-02-28 01:01:37.607530 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task 69a94cb4-2054-4292-b680-ee775cad5eaa is in state SUCCESS 2026-02-28 01:01:37.607559 | orchestrator | 2026-02-28 01:01:37 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:37.607577 | orchestrator | 2026-02-28 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:40.654759 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:40.656504 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:40.656545 | orchestrator | 2026-02-28 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:43.706555 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:43.709097 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:43.709167 | orchestrator | 2026-02-28 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:46.761494 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:46.762763 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:46.763351 | orchestrator | 2026-02-28 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:49.805359 | orchestrator | 
2026-02-28 01:01:49 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:49.806956 | orchestrator | 2026-02-28 01:01:49 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:49.807057 | orchestrator | 2026-02-28 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:52.851832 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:52.854800 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:52.854857 | orchestrator | 2026-02-28 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:55.906134 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:55.908183 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:55.908227 | orchestrator | 2026-02-28 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:58.956182 | orchestrator | 2026-02-28 01:01:58 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:01:58.957795 | orchestrator | 2026-02-28 01:01:58 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:01:58.957857 | orchestrator | 2026-02-28 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:02.002085 | orchestrator | 2026-02-28 01:02:01 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:02.003527 | orchestrator | 2026-02-28 01:02:02 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:02.003815 | orchestrator | 2026-02-28 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:05.047124 | orchestrator | 2026-02-28 01:02:05 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in 
state STARTED 2026-02-28 01:02:05.049520 | orchestrator | 2026-02-28 01:02:05 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:05.049592 | orchestrator | 2026-02-28 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:08.110111 | orchestrator | 2026-02-28 01:02:08 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:08.114091 | orchestrator | 2026-02-28 01:02:08 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:08.114148 | orchestrator | 2026-02-28 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:11.162313 | orchestrator | 2026-02-28 01:02:11 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:11.164625 | orchestrator | 2026-02-28 01:02:11 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:11.164773 | orchestrator | 2026-02-28 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:14.213548 | orchestrator | 2026-02-28 01:02:14 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:14.215029 | orchestrator | 2026-02-28 01:02:14 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:14.215348 | orchestrator | 2026-02-28 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:17.250950 | orchestrator | 2026-02-28 01:02:17 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:17.252822 | orchestrator | 2026-02-28 01:02:17 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:17.253037 | orchestrator | 2026-02-28 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:20.304001 | orchestrator | 2026-02-28 01:02:20 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:20.305713 | orchestrator | 2026-02-28 01:02:20 | 
INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:20.305772 | orchestrator | 2026-02-28 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:23.350089 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:23.350762 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:23.350811 | orchestrator | 2026-02-28 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:26.385030 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:26.386124 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:26.386181 | orchestrator | 2026-02-28 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:29.421660 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:29.423932 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:29.424128 | orchestrator | 2026-02-28 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:32.474598 | orchestrator | 2026-02-28 01:02:32 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:32.476444 | orchestrator | 2026-02-28 01:02:32 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:32.476465 | orchestrator | 2026-02-28 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:35.529439 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:35.529732 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 
2026-02-28 01:02:35.529758 | orchestrator | 2026-02-28 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:38.579801 | orchestrator | 2026-02-28 01:02:38 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state STARTED 2026-02-28 01:02:38.582829 | orchestrator | 2026-02-28 01:02:38 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state STARTED 2026-02-28 01:02:38.582897 | orchestrator | 2026-02-28 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:41.616956 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task fad2846c-fad1-48df-8790-dd3e360bf4e1 is in state SUCCESS 2026-02-28 01:02:41.617133 | orchestrator | 2026-02-28 01:02:41.617149 | orchestrator | 2026-02-28 01:02:41.617156 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-28 01:02:41.617163 | orchestrator | 2026-02-28 01:02:41.617170 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-28 01:02:41.617177 | orchestrator | Saturday 28 February 2026 01:00:59 +0000 (0:00:00.184) 0:00:00.184 ***** 2026-02-28 01:02:41.617184 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.617192 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617198 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617205 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.617211 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617217 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.617224 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.617230 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.617236 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.617242 | orchestrator | 2026-02-28 01:02:41.617268 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-28 01:02:41.617298 | orchestrator | Saturday 28 February 2026 01:01:04 +0000 (0:00:05.146) 0:00:05.331 ***** 2026-02-28 01:02:41.617325 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.617332 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617344 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.617350 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617356 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.617363 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.617369 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.617375 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.617381 | orchestrator | 2026-02-28 01:02:41.617387 | orchestrator | TASK [Create share directory] 
************************************************** 2026-02-28 01:02:41.617439 | orchestrator | Saturday 28 February 2026 01:01:08 +0000 (0:00:04.508) 0:00:09.839 ***** 2026-02-28 01:02:41.617446 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 01:02:41.617453 | orchestrator | 2026-02-28 01:02:41.617459 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-28 01:02:41.617477 | orchestrator | Saturday 28 February 2026 01:01:09 +0000 (0:00:01.152) 0:00:10.992 ***** 2026-02-28 01:02:41.617484 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.617490 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617496 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617503 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.617509 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617515 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.617521 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.617527 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.617533 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.617540 | orchestrator | 2026-02-28 01:02:41.617546 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-28 01:02:41.617552 | orchestrator | Saturday 28 February 2026 01:01:24 +0000 (0:00:14.306) 0:00:25.299 ***** 2026-02-28 01:02:41.617558 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-28 01:02:41.617565 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-02-28 01:02:41.617571 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-28 01:02:41.617593 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-28 01:02:41.617599 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-28 01:02:41.617605 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-28 01:02:41.617625 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-28 01:02:41.617698 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-28 01:02:41.617710 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-28 01:02:41.617720 | orchestrator | 2026-02-28 01:02:41.617756 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-28 01:02:41.617769 | orchestrator | Saturday 28 February 2026 01:01:27 +0000 (0:00:03.287) 0:00:28.586 ***** 2026-02-28 01:02:41.617778 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.617897 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617950 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617963 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.617973 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.617982 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.617993 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.618005 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.618068 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.618083 | orchestrator | 2026-02-28 01:02:41.618090 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:02:41.618144 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:02:41.618153 | orchestrator | 2026-02-28 01:02:41.618159 | orchestrator | 2026-02-28 01:02:41.618166 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:02:41.618172 | orchestrator | Saturday 28 February 2026 01:01:34 +0000 (0:00:07.299) 0:00:35.886 ***** 2026-02-28 01:02:41.618209 | orchestrator | =============================================================================== 2026-02-28 01:02:41.618215 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.31s 2026-02-28 01:02:41.618222 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.30s 2026-02-28 01:02:41.618228 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.15s 2026-02-28 01:02:41.618234 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.51s 2026-02-28 01:02:41.618241 | orchestrator | Check if target directories exist --------------------------------------- 3.29s 2026-02-28 01:02:41.618273 | orchestrator | Create share directory -------------------------------------------------- 1.15s 2026-02-28 01:02:41.618294 | 
orchestrator | 2026-02-28 01:02:41.618679 | orchestrator | 2026-02-28 01:02:41.618752 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:02:41.618762 | orchestrator | 2026-02-28 01:02:41.618770 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:02:41.618777 | orchestrator | Saturday 28 February 2026 00:59:48 +0000 (0:00:00.279) 0:00:00.279 ***** 2026-02-28 01:02:41.618784 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.618793 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.618801 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.618809 | orchestrator | 2026-02-28 01:02:41.618826 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:02:41.618834 | orchestrator | Saturday 28 February 2026 00:59:49 +0000 (0:00:00.365) 0:00:00.645 ***** 2026-02-28 01:02:41.618842 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-28 01:02:41.618850 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-28 01:02:41.618858 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-28 01:02:41.618865 | orchestrator | 2026-02-28 01:02:41.618886 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-28 01:02:41.618894 | orchestrator | 2026-02-28 01:02:41.618902 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.618910 | orchestrator | Saturday 28 February 2026 00:59:49 +0000 (0:00:00.453) 0:00:01.098 ***** 2026-02-28 01:02:41.618919 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:02:41.618926 | orchestrator | 2026-02-28 01:02:41.618933 | orchestrator | TASK [keystone : Ensuring config directories exist] 
**************************** 2026-02-28 01:02:41.618941 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:00.568) 0:00:01.667 ***** 2026-02-28 01:02:41.618954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.618965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.618985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.618998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-02-28 01:02:41.619012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 
01:02:41.619036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619050 | orchestrator | 2026-02-28 01:02:41.619058 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-02-28 01:02:41.619069 | orchestrator | Saturday 28 February 2026 00:59:52 +0000 (0:00:02.024) 0:00:03.692 ***** 2026-02-28 01:02:41.619077 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.619090 | orchestrator | 2026-02-28 01:02:41.619098 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-28 01:02:41.619105 | orchestrator | Saturday 28 February 2026 00:59:52 +0000 (0:00:00.146) 0:00:03.838 ***** 2026-02-28 01:02:41.619113 | orchestrator 
| skipping: [testbed-node-0] 2026-02-28 01:02:41.619121 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.619129 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.619140 | orchestrator | 2026-02-28 01:02:41.619148 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-28 01:02:41.619156 | orchestrator | Saturday 28 February 2026 00:59:52 +0000 (0:00:00.529) 0:00:04.368 ***** 2026-02-28 01:02:41.619163 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:02:41.619172 | orchestrator | 2026-02-28 01:02:41.619180 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.619188 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:00.849) 0:00:05.218 ***** 2026-02-28 01:02:41.619197 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:02:41.619205 | orchestrator | 2026-02-28 01:02:41.619214 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-28 01:02:41.619222 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.609) 0:00:05.828 ***** 2026-02-28 01:02:41.619231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.619240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.619254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.619273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.619327 | orchestrator | 2026-02-28 01:02:41.619336 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-28 01:02:41.619344 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:03.657) 0:00:09.486 ***** 2026-02-28 01:02:41.619362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.619372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.619380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.619388 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.619398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619438 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.619446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619473 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.619481 | orchestrator |
2026-02-28 01:02:41.619490 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-02-28 01:02:41.619499 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:00.644) 0:00:10.130 *****
2026-02-28 01:02:41.619512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619547 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.619557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619588 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.619603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619625 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.619648 | orchestrator |
2026-02-28 01:02:41.619655 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-02-28 01:02:41.619661 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.945) 0:00:11.076 *****
2026-02-28 01:02:41.619667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619765 | orchestrator |
2026-02-28 01:02:41.619773 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-02-28 01:02:41.619780 | orchestrator | Saturday 28 February 2026 01:00:03 +0000 (0:00:03.628) 0:00:14.705 *****
2026-02-28 01:02:41.619788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.619840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.619847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.619875 | orchestrator |
2026-02-28 01:02:41.619882 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-28 01:02:41.619889 | orchestrator | Saturday 28 February 2026 01:00:10 +0000 (0:00:06.700) 0:00:21.406 *****
2026-02-28 01:02:41.619896 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:02:41.619904 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:02:41.619911 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:02:41.619918 | orchestrator |
2026-02-28 01:02:41.619925 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-28 01:02:41.619932 | orchestrator | Saturday 28 February 2026 01:00:11 +0000 (0:00:01.783) 0:00:23.190 *****
2026-02-28 01:02:41.619939 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.619947 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.619954 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.619962 | orchestrator |
2026-02-28 01:02:41.619969 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-28 01:02:41.619979 | orchestrator | Saturday 28 February 2026 01:00:12 +0000 (0:00:00.578) 0:00:23.768 *****
2026-02-28 01:02:41.619987 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.619993 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.620001 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.620008 | orchestrator |
2026-02-28 01:02:41.620014 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-28 01:02:41.620021 | orchestrator | Saturday 28 February 2026 01:00:12 +0000 (0:00:00.310) 0:00:24.078 *****
2026-02-28 01:02:41.620031 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.620038 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.620046 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.620052 | orchestrator |
2026-02-28 01:02:41.620060 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-28 01:02:41.620067 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:00.571) 0:00:24.650 *****
2026-02-28 01:02:41.620075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.620087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.620095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.620103 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.620110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.620125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.620134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.620147 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.620155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-28 01:02:41.620163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-28 01:02:41.620171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-28 01:02:41.620178 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.620186 | orchestrator |
2026-02-28 01:02:41.620193 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-28 01:02:41.620200 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:00.631) 0:00:25.281 *****
2026-02-28 01:02:41.620207 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.620215 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.620222 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.620229 | orchestrator |
2026-02-28 01:02:41.620236 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-28 01:02:41.620243 | orchestrator | Saturday 28 February 2026 01:00:14 +0000 (0:00:00.408) 0:00:25.690 *****
2026-02-28 01:02:41.620251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-28 01:02:41.620262 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-28 01:02:41.620269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-28 01:02:41.620277 | orchestrator |
2026-02-28 01:02:41.620284 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-28 01:02:41.620291 | orchestrator | Saturday 28 February 2026 01:00:15 +0000 (0:00:01.575) 0:00:27.266 *****
2026-02-28 01:02:41.620305 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 01:02:41.620312 | orchestrator |
2026-02-28 01:02:41.620322 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-28 01:02:41.620329 | orchestrator | Saturday 28 February 2026 01:00:17 +0000 (0:00:01.278) 0:00:28.544 *****
2026-02-28 01:02:41.620336 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:02:41.620343 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:02:41.620350 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:02:41.620358 | orchestrator |
2026-02-28 01:02:41.620366 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-28 01:02:41.620375 | orchestrator | Saturday 28 February 2026 01:00:18 +0000 (0:00:00.877) 0:00:29.421 *****
2026-02-28 01:02:41.620383 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-28 01:02:41.620392 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-28 01:02:41.620401 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 01:02:41.620410 | orchestrator |
2026-02-28 01:02:41.620418 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-28 01:02:41.620426 | orchestrator | Saturday 28 February 2026 01:00:19 +0000 (0:00:01.195) 0:00:30.616 *****
2026-02-28 01:02:41.620434 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:02:41.620442 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:02:41.620452 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:02:41.620460 | orchestrator |
2026-02-28 01:02:41.620466 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-28 01:02:41.620472 | orchestrator | Saturday 28 February 2026 01:00:19 +0000 (0:00:00.345) 0:00:30.962 *****
2026-02-28 01:02:41.620478 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2',
'dest': 'crontab'}) 2026-02-28 01:02:41.620484 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:02:41.620490 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:02:41.620496 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:02:41.620502 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:02:41.620508 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:02:41.620514 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:02:41.620521 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:02:41.620527 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:02:41.620534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:02:41.620540 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:02:41.620546 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:02:41.620553 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:02:41.620559 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:02:41.620566 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:02:41.620572 | 
orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:02:41.620578 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:02:41.620584 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:02:41.620589 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:02:41.620603 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:02:41.620610 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:02:41.620616 | orchestrator | 2026-02-28 01:02:41.620623 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-28 01:02:41.620641 | orchestrator | Saturday 28 February 2026 01:00:28 +0000 (0:00:09.320) 0:00:40.282 ***** 2026-02-28 01:02:41.620649 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:02:41.620655 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:02:41.620662 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:02:41.620667 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:02:41.620673 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:02:41.620684 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:02:41.620691 | orchestrator | 2026-02-28 01:02:41.620698 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-28 01:02:41.620706 | orchestrator | Saturday 28 February 2026 01:00:32 +0000 
(0:00:03.150) 0:00:43.432 ***** 2026-02-28 01:02:41.620717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.620726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.620735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.620748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.620763 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.620771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.620779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.620786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.620794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.620807 | orchestrator | 2026-02-28 01:02:41.620814 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.620822 | orchestrator | Saturday 28 February 2026 01:00:34 +0000 (0:00:02.532) 0:00:45.965 ***** 2026-02-28 01:02:41.620829 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.620836 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.620844 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.620851 | orchestrator | 2026-02-28 01:02:41.620858 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-28 01:02:41.620865 | orchestrator | Saturday 28 February 2026 01:00:34 +0000 (0:00:00.380) 0:00:46.345 ***** 
2026-02-28 01:02:41.620873 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.620880 | orchestrator | 2026-02-28 01:02:41.620887 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-28 01:02:41.620895 | orchestrator | Saturday 28 February 2026 01:00:37 +0000 (0:00:02.378) 0:00:48.724 ***** 2026-02-28 01:02:41.620901 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.620909 | orchestrator | 2026-02-28 01:02:41.620916 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-28 01:02:41.620923 | orchestrator | Saturday 28 February 2026 01:00:39 +0000 (0:00:02.339) 0:00:51.063 ***** 2026-02-28 01:02:41.620930 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.620937 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.620944 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.620951 | orchestrator | 2026-02-28 01:02:41.620958 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-28 01:02:41.621076 | orchestrator | Saturday 28 February 2026 01:00:40 +0000 (0:00:01.236) 0:00:52.300 ***** 2026-02-28 01:02:41.621088 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.621096 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.621104 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.621111 | orchestrator | 2026-02-28 01:02:41.621119 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-28 01:02:41.621126 | orchestrator | Saturday 28 February 2026 01:00:41 +0000 (0:00:00.345) 0:00:52.645 ***** 2026-02-28 01:02:41.621133 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.621142 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.621153 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.621161 | orchestrator | 2026-02-28 01:02:41.621168 | orchestrator | TASK [keystone : 
Running Keystone bootstrap container] ************************* 2026-02-28 01:02:41.621176 | orchestrator | Saturday 28 February 2026 01:00:41 +0000 (0:00:00.391) 0:00:53.037 ***** 2026-02-28 01:02:41.621183 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.621191 | orchestrator | 2026-02-28 01:02:41.621198 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-28 01:02:41.621205 | orchestrator | Saturday 28 February 2026 01:00:58 +0000 (0:00:16.579) 0:01:09.617 ***** 2026-02-28 01:02:41.621212 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.621220 | orchestrator | 2026-02-28 01:02:41.621227 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:02:41.621235 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:12.139) 0:01:21.756 ***** 2026-02-28 01:02:41.621243 | orchestrator | 2026-02-28 01:02:41.621250 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:02:41.621257 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.089) 0:01:21.846 ***** 2026-02-28 01:02:41.621264 | orchestrator | 2026-02-28 01:02:41.621272 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:02:41.621285 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.073) 0:01:21.920 ***** 2026-02-28 01:02:41.621292 | orchestrator | 2026-02-28 01:02:41.621300 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-28 01:02:41.621308 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.068) 0:01:21.988 ***** 2026-02-28 01:02:41.621315 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.621323 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.621330 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.621337 | 
orchestrator | 2026-02-28 01:02:41.621345 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-28 01:02:41.621352 | orchestrator | Saturday 28 February 2026 01:01:28 +0000 (0:00:17.457) 0:01:39.445 ***** 2026-02-28 01:02:41.621359 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.621367 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.621374 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.621381 | orchestrator | 2026-02-28 01:02:41.621388 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-28 01:02:41.621396 | orchestrator | Saturday 28 February 2026 01:01:33 +0000 (0:00:05.070) 0:01:44.515 ***** 2026-02-28 01:02:41.621403 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.621410 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.621418 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.621425 | orchestrator | 2026-02-28 01:02:41.621433 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.621440 | orchestrator | Saturday 28 February 2026 01:01:45 +0000 (0:00:12.096) 0:01:56.611 ***** 2026-02-28 01:02:41.621448 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:02:41.621455 | orchestrator | 2026-02-28 01:02:41.621462 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-28 01:02:41.621469 | orchestrator | Saturday 28 February 2026 01:01:46 +0000 (0:00:00.908) 0:01:57.520 ***** 2026-02-28 01:02:41.621477 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.621484 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.621491 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.621498 | orchestrator | 2026-02-28 01:02:41.621506 | orchestrator | TASK [keystone : Run key 
distribution] ***************************************** 2026-02-28 01:02:41.621513 | orchestrator | Saturday 28 February 2026 01:01:46 +0000 (0:00:00.785) 0:01:58.305 ***** 2026-02-28 01:02:41.621520 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.621527 | orchestrator | 2026-02-28 01:02:41.621535 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-28 01:02:41.621542 | orchestrator | Saturday 28 February 2026 01:01:48 +0000 (0:00:01.945) 0:02:00.250 ***** 2026-02-28 01:02:41.621549 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-28 01:02:41.621557 | orchestrator | 2026-02-28 01:02:41.621565 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-28 01:02:41.621572 | orchestrator | Saturday 28 February 2026 01:02:02 +0000 (0:00:13.224) 0:02:13.475 ***** 2026-02-28 01:02:41.621578 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-28 01:02:41.621584 | orchestrator | 2026-02-28 01:02:41.621590 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-28 01:02:41.621596 | orchestrator | Saturday 28 February 2026 01:02:27 +0000 (0:00:25.506) 0:02:38.981 ***** 2026-02-28 01:02:41.621603 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-28 01:02:41.621610 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-28 01:02:41.621618 | orchestrator | 2026-02-28 01:02:41.621625 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-28 01:02:41.621652 | orchestrator | Saturday 28 February 2026 01:02:34 +0000 (0:00:07.024) 0:02:46.005 ***** 2026-02-28 01:02:41.621666 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.621674 | orchestrator | 2026-02-28 01:02:41.621681 | 
orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-28 01:02:41.621688 | orchestrator | Saturday 28 February 2026 01:02:34 +0000 (0:00:00.152) 0:02:46.157 ***** 2026-02-28 01:02:41.621696 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.621703 | orchestrator | 2026-02-28 01:02:41.621716 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-28 01:02:41.621724 | orchestrator | Saturday 28 February 2026 01:02:34 +0000 (0:00:00.166) 0:02:46.324 ***** 2026-02-28 01:02:41.621733 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.621741 | orchestrator | 2026-02-28 01:02:41.621749 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-28 01:02:41.621757 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:00.153) 0:02:46.477 ***** 2026-02-28 01:02:41.621770 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.621778 | orchestrator | 2026-02-28 01:02:41.621786 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-28 01:02:41.621794 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:00.659) 0:02:47.137 ***** 2026-02-28 01:02:41.621803 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.621811 | orchestrator | 2026-02-28 01:02:41.621819 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.621828 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:03.583) 0:02:50.721 ***** 2026-02-28 01:02:41.621835 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.621843 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.621851 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.621858 | orchestrator | 2026-02-28 01:02:41.621865 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 01:02:41.621874 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 01:02:41.621881 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:02:41.621888 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:02:41.621894 | orchestrator | 2026-02-28 01:02:41.621901 | orchestrator | 2026-02-28 01:02:41.621909 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:02:41.621916 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:00.497) 0:02:51.218 ***** 2026-02-28 01:02:41.621923 | orchestrator | =============================================================================== 2026-02-28 01:02:41.621930 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.51s 2026-02-28 01:02:41.621937 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 17.46s 2026-02-28 01:02:41.621945 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.58s 2026-02-28 01:02:41.621952 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.22s 2026-02-28 01:02:41.621960 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.14s 2026-02-28 01:02:41.621967 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.10s 2026-02-28 01:02:41.621975 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.32s 2026-02-28 01:02:41.621982 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.02s 2026-02-28 01:02:41.621990 | orchestrator | keystone : Copying over 
keystone.conf ----------------------------------- 6.70s 2026-02-28 01:02:41.621996 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.07s 2026-02-28 01:02:41.622004 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.66s 2026-02-28 01:02:41.622052 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.63s 2026-02-28 01:02:41.622063 | orchestrator | keystone : Creating default user role ----------------------------------- 3.58s 2026-02-28 01:02:41.622071 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.15s 2026-02-28 01:02:41.622079 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.53s 2026-02-28 01:02:41.622087 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.38s 2026-02-28 01:02:41.622095 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.34s 2026-02-28 01:02:41.622103 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.02s 2026-02-28 01:02:41.622111 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.95s 2026-02-28 01:02:41.622120 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.78s 2026-02-28 01:02:41.622128 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task f951a785-f861-4be8-bd40-b3b62b866cb2 is in state STARTED 2026-02-28 01:02:41.622137 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:02:41.622283 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state STARTED 2026-02-28 01:02:41.623133 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task 4d9b9a5d-57d3-4f9f-94be-bcee4f972778 is in state STARTED 2026-02-28 01:02:41.627921 | 
orchestrator | 2026-02-28 01:02:41 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:02:41.629387 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task 0116532e-b3b6-405e-91de-757156bbe67d is in state SUCCESS
2026-02-28 01:02:41.629429 | orchestrator | 2026-02-28 01:02:41 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:02:44.664277 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task f951a785-f861-4be8-bd40-b3b62b866cb2 is in state STARTED
2026-02-28 01:02:44.669458 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:02:44.672625 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state STARTED
2026-02-28 01:02:44.678075 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 4d9b9a5d-57d3-4f9f-94be-bcee4f972778 is in state STARTED
2026-02-28 01:02:44.678545 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:02:44.678581 | orchestrator | 2026-02-28 01:02:44 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:02:50.784738 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task f951a785-f861-4be8-bd40-b3b62b866cb2 is in state STARTED
2026-02-28 01:02:50.784816 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:02:50.784827 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state STARTED
2026-02-28 01:02:50.784857 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 4d9b9a5d-57d3-4f9f-94be-bcee4f972778 is in state SUCCESS
2026-02-28 01:02:50.785156 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:02:50.787416 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED
2026-02-28 01:02:50.787472 | orchestrator | 2026-02-28 01:02:50 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:02:53.821680 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task f951a785-f861-4be8-bd40-b3b62b866cb2 is in state STARTED
2026-02-28 01:02:53.822996 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:02:53.823556 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state STARTED
2026-02-28 01:02:53.825685 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:02:53.826118 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED
2026-02-28 01:02:53.826145 | orchestrator | 2026-02-28 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:10.090055 | orchestrator | 2026-02-28 01:04:10 | INFO  | Task f951a785-f861-4be8-bd40-b3b62b866cb2 is in state STARTED
2026-02-28 01:04:10.090307 | orchestrator | 2026-02-28 01:04:10 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:04:10.091219 | orchestrator | 2026-02-28 01:04:10 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state STARTED
2026-02-28 01:04:10.092575 | orchestrator | 2026-02-28 01:04:10 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:04:10.093131 | orchestrator | 2026-02-28 01:04:10 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED
2026-02-28 01:04:10.093155 | orchestrator | 2026-02-28 01:04:10 | INFO  | Wait 1 second(s) until the next check
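The wait loop above polls a set of task IDs once per interval and drops each one as it reaches SUCCESS. A minimal sketch of that pattern, assuming a hypothetical `get_state` callback (the real OSISM tooling queries its own task API; names here are illustrative only):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll task states until all reach SUCCESS, sleeping between rounds.

    `get_state` is a stand-in for whatever API returns a task's state
    string (e.g. "STARTED" or "SUCCESS").
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        # sorted() snapshots the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

Note that tasks are only removed, never re-added: a task observed as SUCCESS stays done, which matches how the log's task list shrinks over time.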
2026-02-28 01:04:16.167495 | orchestrator |
2026-02-28 01:04:16.167614 | orchestrator |
2026-02-28 01:04:16.167727 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-28 01:04:16.167758 | orchestrator |
2026-02-28 01:04:16.167778 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-28 01:04:16.167798 | orchestrator | Saturday 28 February 2026 01:01:39 +0000 (0:00:00.245) 0:00:00.245 *****
2026-02-28 01:04:16.167839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-28 01:04:16.167852 | orchestrator |
2026-02-28 01:04:16.167862 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-28 01:04:16.167873 | orchestrator | Saturday 28 February 2026 01:01:39 +0000 (0:00:00.256) 0:00:00.502 *****
2026-02-28 01:04:16.167883 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-28 01:04:16.167893 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-28 01:04:16.167903 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-28 01:04:16.167913 | orchestrator |
2026-02-28 01:04:16.167923 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-28 01:04:16.167932 | orchestrator | Saturday 28 February 2026 01:01:41 +0000 (0:00:01.436) 0:00:01.939 *****
2026-02-28 01:04:16.167943 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-28 01:04:16.167983 | orchestrator |
2026-02-28 01:04:16.167996 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-28 01:04:16.168008 | orchestrator | Saturday 28 February 2026 01:01:42 +0000 (0:00:01.587) 0:00:03.526 *****
2026-02-28 01:04:16.168019 | orchestrator | changed: [testbed-manager]
2026-02-28 01:04:16.168031 | orchestrator |
2026-02-28 01:04:16.168042 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-28 01:04:16.168054 | orchestrator | Saturday 28 February 2026 01:01:43 +0000 (0:00:01.004) 0:00:04.527 *****
2026-02-28 01:04:16.168066 | orchestrator | changed: [testbed-manager]
2026-02-28 01:04:16.168077 | orchestrator |
2026-02-28 01:04:16.168088 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-28 01:04:16.168099 | orchestrator | Saturday 28 February 2026 01:01:44 +0000 (0:00:01.004) 0:00:05.531 *****
2026-02-28 01:04:16.168111 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-28 01:04:16.168122 | orchestrator | ok: [testbed-manager]
2026-02-28 01:04:16.168134 | orchestrator |
2026-02-28 01:04:16.168146 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-28 01:04:16.168157 | orchestrator | Saturday 28 February 2026 01:02:28 +0000 (0:00:43.517) 0:00:49.048 *****
2026-02-28 01:04:16.168168 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-28 01:04:16.168180 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-28 01:04:16.168193 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-28 01:04:16.168204 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-28 01:04:16.168216 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-28 01:04:16.168227 | orchestrator |
2026-02-28 01:04:16.168239 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-28 01:04:16.168250 | orchestrator | Saturday 28 February 2026 01:02:32 +0000 (0:00:04.280) 0:00:53.329 *****
2026-02-28 01:04:16.168260 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-28 01:04:16.168270 | orchestrator |
2026-02-28 01:04:16.168279 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-28 01:04:16.168289 | orchestrator | Saturday 28 February 2026 01:02:33 +0000 (0:00:00.171) 0:00:53.840 *****
2026-02-28 01:04:16.168299 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:04:16.168308 | orchestrator |
2026-02-28 01:04:16.168318 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-28 01:04:16.168328 | orchestrator | Saturday 28 February 2026 01:02:33 +0000 (0:00:00.535) 0:00:54.012 *****
2026-02-28 01:04:16.168337 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:04:16.168347 | orchestrator |
2026-02-28 01:04:16.168357 | orchestrator | RUNNING HANDLER
[osism.services.cephclient : Restart cephclient service] *******
2026-02-28 01:04:16.168367 | orchestrator | Saturday 28 February 2026 01:02:34 +0000 (0:00:00.535) 0:00:54.547 *****
2026-02-28 01:04:16.168385 | orchestrator | changed: [testbed-manager]
2026-02-28 01:04:16.168395 | orchestrator |
2026-02-28 01:04:16.168405 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-28 01:04:16.168415 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:01.508) 0:00:56.056 *****
2026-02-28 01:04:16.168424 | orchestrator | changed: [testbed-manager]
2026-02-28 01:04:16.168434 | orchestrator |
2026-02-28 01:04:16.168443 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-28 01:04:16.168453 | orchestrator | Saturday 28 February 2026 01:02:36 +0000 (0:00:00.779) 0:00:56.835 *****
2026-02-28 01:04:16.168463 | orchestrator | changed: [testbed-manager]
2026-02-28 01:04:16.168472 | orchestrator |
2026-02-28 01:04:16.168482 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-28 01:04:16.168491 | orchestrator | Saturday 28 February 2026 01:02:36 +0000 (0:00:00.627) 0:00:57.463 *****
2026-02-28 01:04:16.168501 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-28 01:04:16.168511 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-28 01:04:16.168521 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-28 01:04:16.168530 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-28 01:04:16.168540 | orchestrator |
2026-02-28 01:04:16.168550 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:04:16.168572 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 01:04:16.168583 | orchestrator |
2026-02-28 01:04:16.168593 | orchestrator |
2026-02-28
01:04:16.168620 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:04:16.168631 | orchestrator | Saturday 28 February 2026 01:02:38 +0000 (0:00:01.591) 0:00:59.054 *****
2026-02-28 01:04:16.168672 | orchestrator | ===============================================================================
2026-02-28 01:04:16.168688 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.52s
2026-02-28 01:04:16.168698 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.28s
2026-02-28 01:04:16.168707 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s
2026-02-28 01:04:16.168717 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.59s
2026-02-28 01:04:16.168726 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.51s
2026-02-28 01:04:16.168736 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.44s
2026-02-28 01:04:16.168745 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.00s
2026-02-28 01:04:16.168755 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s
2026-02-28 01:04:16.168764 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2026-02-28 01:04:16.168774 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s
2026-02-28 01:04:16.168783 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s
2026-02-28 01:04:16.168793 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-02-28 01:04:16.168802 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s
2026-02-28 01:04:16.168812 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.17s
2026-02-28 01:04:16.168822 | orchestrator |
2026-02-28 01:04:16.168831 | orchestrator |
2026-02-28 01:04:16.168841 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:04:16.168851 | orchestrator |
2026-02-28 01:04:16.168860 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:04:16.168870 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.212) 0:00:00.212 *****
2026-02-28 01:04:16.168880 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:04:16.168922 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:04:16.168933 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:04:16.168952 | orchestrator |
2026-02-28 01:04:16.168961 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:04:16.168971 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.601) 0:00:00.813 *****
2026-02-28 01:04:16.168981 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-28 01:04:16.168990 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-28 01:04:16.169000 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-28 01:04:16.169010 | orchestrator |
2026-02-28 01:04:16.169019 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-02-28 01:04:16.169029 | orchestrator |
2026-02-28 01:04:16.169039 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-02-28 01:04:16.169048 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:01.367) 0:00:02.181 *****
2026-02-28 01:04:16.169058 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:04:16.169068 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:04:16.169077 | orchestrator | ok:
[testbed-node-1]
2026-02-28 01:04:16.169087 | orchestrator |
2026-02-28 01:04:16.169096 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:04:16.169107 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:04:16.169118 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:04:16.169127 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:04:16.169137 | orchestrator |
2026-02-28 01:04:16.169147 | orchestrator |
2026-02-28 01:04:16.169157 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:04:16.169167 | orchestrator | Saturday 28 February 2026 01:02:48 +0000 (0:00:01.001) 0:00:03.183 *****
2026-02-28 01:04:16.169176 | orchestrator | ===============================================================================
2026-02-28 01:04:16.169186 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.37s
2026-02-28 01:04:16.169195 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.00s
2026-02-28 01:04:16.169205 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2026-02-28 01:04:16.169214 | orchestrator |
2026-02-28 01:04:16.169224 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 01:04:16.169234 | orchestrator | 2.16.14
2026-02-28 01:04:16.169244 | orchestrator |
2026-02-28 01:04:16.169253 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-02-28 01:04:16.169263 | orchestrator |
2026-02-28 01:04:16.169272 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-28 01:04:16.169282 | orchestrator |
Saturday 28 February 2026 01:02:43 +0000 (0:00:00.287) 0:00:00.287 ***** 2026-02-28 01:04:16.169292 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169302 | orchestrator | 2026-02-28 01:04:16.169312 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-28 01:04:16.169321 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:01.552) 0:00:01.840 ***** 2026-02-28 01:04:16.169331 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169340 | orchestrator | 2026-02-28 01:04:16.169356 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-28 01:04:16.169366 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:01.067) 0:00:02.907 ***** 2026-02-28 01:04:16.169384 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169394 | orchestrator | 2026-02-28 01:04:16.169404 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-28 01:04:16.169413 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:00.988) 0:00:03.896 ***** 2026-02-28 01:04:16.169423 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169433 | orchestrator | 2026-02-28 01:04:16.169449 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-28 01:04:16.169459 | orchestrator | Saturday 28 February 2026 01:02:48 +0000 (0:00:01.054) 0:00:04.951 ***** 2026-02-28 01:04:16.169468 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169478 | orchestrator | 2026-02-28 01:04:16.169488 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-28 01:04:16.169498 | orchestrator | Saturday 28 February 2026 01:02:49 +0000 (0:00:01.125) 0:00:06.077 ***** 2026-02-28 01:04:16.169507 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169517 | orchestrator | 2026-02-28 01:04:16.169526 
| orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-28 01:04:16.169536 | orchestrator | Saturday 28 February 2026 01:02:50 +0000 (0:00:01.173) 0:00:07.250 ***** 2026-02-28 01:04:16.169546 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169555 | orchestrator | 2026-02-28 01:04:16.169565 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-28 01:04:16.169575 | orchestrator | Saturday 28 February 2026 01:02:52 +0000 (0:00:02.036) 0:00:09.286 ***** 2026-02-28 01:04:16.169584 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169594 | orchestrator | 2026-02-28 01:04:16.169604 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-28 01:04:16.169614 | orchestrator | Saturday 28 February 2026 01:02:54 +0000 (0:00:01.340) 0:00:10.627 ***** 2026-02-28 01:04:16.169623 | orchestrator | changed: [testbed-manager] 2026-02-28 01:04:16.169633 | orchestrator | 2026-02-28 01:04:16.169660 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-28 01:04:16.169670 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:54.459) 0:01:05.087 ***** 2026-02-28 01:04:16.169680 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:04:16.169690 | orchestrator | 2026-02-28 01:04:16.169700 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-28 01:04:16.169710 | orchestrator | 2026-02-28 01:04:16.169720 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-28 01:04:16.169729 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:00.172) 0:01:05.259 ***** 2026-02-28 01:04:16.169739 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:04:16.169749 | orchestrator | 2026-02-28 01:04:16.169759 | orchestrator | PLAY [Restart ceph manager 
services] ******************************************* 2026-02-28 01:04:16.169768 | orchestrator | 2026-02-28 01:04:16.169778 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-28 01:04:16.169787 | orchestrator | Saturday 28 February 2026 01:04:00 +0000 (0:00:11.685) 0:01:16.944 ***** 2026-02-28 01:04:16.169797 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:04:16.169806 | orchestrator | 2026-02-28 01:04:16.169816 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-28 01:04:16.169826 | orchestrator | 2026-02-28 01:04:16.169836 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-28 01:04:16.169845 | orchestrator | Saturday 28 February 2026 01:04:01 +0000 (0:00:01.482) 0:01:18.427 ***** 2026-02-28 01:04:16.169855 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:04:16.169865 | orchestrator | 2026-02-28 01:04:16.169874 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:04:16.169884 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 01:04:16.169894 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:16.169904 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:16.169914 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:04:16.169930 | orchestrator | 2026-02-28 01:04:16.169939 | orchestrator | 2026-02-28 01:04:16.169949 | orchestrator | 2026-02-28 01:04:16.169959 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:04:16.169969 | orchestrator | Saturday 28 February 2026 01:04:13 +0000 (0:00:11.227) 0:01:29.655 
***** 2026-02-28 01:04:16.169978 | orchestrator | =============================================================================== 2026-02-28 01:04:16.169988 | orchestrator | Create admin user ------------------------------------------------------ 54.46s 2026-02-28 01:04:16.169998 | orchestrator | Restart ceph manager service ------------------------------------------- 24.40s 2026-02-28 01:04:16.170007 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2026-02-28 01:04:16.170119 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.55s 2026-02-28 01:04:16.170131 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.34s 2026-02-28 01:04:16.170141 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.17s 2026-02-28 01:04:16.170150 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.13s 2026-02-28 01:04:16.170160 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.07s 2026-02-28 01:04:16.170170 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.05s 2026-02-28 01:04:16.170195 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.99s 2026-02-28 01:04:16.170206 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2026-02-28 01:04:16.170223 | orchestrator | 2026-02-28 01:04:16 | INFO  | Task f951a785-f861-4be8-bd40-b3b62b866cb2 is in state SUCCESS 2026-02-28 01:04:16.170443 | orchestrator | 2026-02-28 01:04:16 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:04:16.170529 | orchestrator | 2026-02-28 01:04:16 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state STARTED 2026-02-28 01:04:16.170541 | orchestrator | 2026-02-28 01:04:16 | INFO  | Task 
4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED 2026-02-28 01:04:16.170549 | orchestrator | 2026-02-28 01:04:16 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED 2026-02-28 01:04:16.170558 | orchestrator | 2026-02-28 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:20.122264 | orchestrator | 2026-02-28 01:05:20 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:05:20.124158 | orchestrator | 2026-02-28 01:05:20 | INFO  | Task 63bfa6b9-9707-458a-ba24-b31070affece is in state SUCCESS 2026-02-28 01:05:20.125887 | orchestrator | 2026-02-28 01:05:20.125970 | orchestrator | 2026-02-28 
01:05:20.125985 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:05:20.125997 | orchestrator | 2026-02-28 01:05:20.126008 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:05:20.126103 | orchestrator | Saturday 28 February 2026 01:02:48 +0000 (0:00:00.369) 0:00:00.369 ***** 2026-02-28 01:05:20.126111 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:05:20.126118 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:05:20.126125 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:05:20.126131 | orchestrator | 2026-02-28 01:05:20.126138 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:05:20.126145 | orchestrator | Saturday 28 February 2026 01:02:48 +0000 (0:00:00.364) 0:00:00.733 ***** 2026-02-28 01:05:20.126152 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-28 01:05:20.126159 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-28 01:05:20.126166 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-28 01:05:20.126172 | orchestrator | 2026-02-28 01:05:20.126178 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-28 01:05:20.126185 | orchestrator | 2026-02-28 01:05:20.126192 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-28 01:05:20.126198 | orchestrator | Saturday 28 February 2026 01:02:49 +0000 (0:00:00.639) 0:00:01.373 ***** 2026-02-28 01:05:20.126277 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:05:20.126285 | orchestrator | 2026-02-28 01:05:20.126291 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-28 01:05:20.126298 | orchestrator | Saturday 28 
February 2026 01:02:49 +0000 (0:00:00.716) 0:00:02.090 ***** 2026-02-28 01:05:20.126305 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-28 01:05:20.126311 | orchestrator | 2026-02-28 01:05:20.126317 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-28 01:05:20.126324 | orchestrator | Saturday 28 February 2026 01:02:54 +0000 (0:00:04.533) 0:00:06.623 ***** 2026-02-28 01:05:20.126330 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-28 01:05:20.126337 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-28 01:05:20.126344 | orchestrator | 2026-02-28 01:05:20.126350 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-28 01:05:20.126356 | orchestrator | Saturday 28 February 2026 01:03:02 +0000 (0:00:07.627) 0:00:14.251 ***** 2026-02-28 01:05:20.126363 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-28 01:05:20.126369 | orchestrator | 2026-02-28 01:05:20.126376 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-28 01:05:20.126382 | orchestrator | Saturday 28 February 2026 01:03:05 +0000 (0:00:03.714) 0:00:17.965 ***** 2026-02-28 01:05:20.126388 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:05:20.126395 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-28 01:05:20.126422 | orchestrator | 2026-02-28 01:05:20.126429 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-28 01:05:20.126435 | orchestrator | Saturday 28 February 2026 01:03:10 +0000 (0:00:04.668) 0:00:22.634 ***** 2026-02-28 01:05:20.126441 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:05:20.126447 | orchestrator 
| changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-28 01:05:20.126454 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-28 01:05:20.126460 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-28 01:05:20.126466 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-28 01:05:20.126473 | orchestrator | 2026-02-28 01:05:20.126479 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-28 01:05:20.126485 | orchestrator | Saturday 28 February 2026 01:03:28 +0000 (0:00:18.190) 0:00:40.824 ***** 2026-02-28 01:05:20.126492 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-28 01:05:20.126498 | orchestrator | 2026-02-28 01:05:20.126504 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-28 01:05:20.126511 | orchestrator | Saturday 28 February 2026 01:03:32 +0000 (0:00:04.141) 0:00:44.966 ***** 2026-02-28 01:05:20.126532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 
01:05:20.126559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.126569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.126600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 
01:05:20.126620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126648 | orchestrator | 
2026-02-28 01:05:20.126659 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-28 01:05:20.126670 | orchestrator | Saturday 28 February 2026 01:03:35 +0000 (0:00:02.791) 0:00:47.758 ***** 2026-02-28 01:05:20.126680 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-28 01:05:20.126690 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-28 01:05:20.126754 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-28 01:05:20.126766 | orchestrator | 2026-02-28 01:05:20.126778 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-28 01:05:20.126790 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:01.167) 0:00:48.926 ***** 2026-02-28 01:05:20.126802 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.126815 | orchestrator | 2026-02-28 01:05:20.126827 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-28 01:05:20.126835 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:00.123) 0:00:49.049 ***** 2026-02-28 01:05:20.126841 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.126847 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:20.126853 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:20.126860 | orchestrator | 2026-02-28 01:05:20.126866 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-28 01:05:20.126872 | orchestrator | Saturday 28 February 2026 01:03:37 +0000 (0:00:00.534) 0:00:49.583 ***** 2026-02-28 01:05:20.126878 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:05:20.126885 | orchestrator | 2026-02-28 01:05:20.126891 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] 
******* 2026-02-28 01:05:20.126897 | orchestrator | Saturday 28 February 2026 01:03:38 +0000 (0:00:00.857) 0:00:50.441 ***** 2026-02-28 01:05:20.126904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.126922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.126930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.126942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.126996 | orchestrator | 2026-02-28 01:05:20.127002 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-28 01:05:20.127009 | orchestrator | Saturday 28 February 2026 01:03:43 +0000 (0:00:05.546) 0:00:55.988 ***** 2026-02-28 01:05:20.127015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.127034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127079 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.127096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.127108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127126 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:20.127136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.127146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127170 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:20.127176 | orchestrator | 2026-02-28 01:05:20.127187 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-28 01:05:20.127205 | orchestrator | Saturday 28 February 2026 01:03:46 +0000 (0:00:02.977) 0:00:58.965 ***** 2026-02-28 01:05:20.127215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.127227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127248 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.127255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.127266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127294 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:20.127300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.127307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.127320 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:20.127329 | orchestrator | 2026-02-28 01:05:20.127339 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-28 01:05:20.127350 | orchestrator | Saturday 28 February 2026 01:03:48 +0000 (0:00:01.398) 0:01:00.364 ***** 2026-02-28 01:05:20.127363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.127670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.127723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.127736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127819 | orchestrator | 2026-02-28 01:05:20.127826 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-28 01:05:20.127833 | orchestrator | Saturday 28 February 2026 01:03:52 +0000 (0:00:04.147) 0:01:04.511 ***** 2026-02-28 01:05:20.127839 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.127846 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:20.127852 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:20.127858 | orchestrator | 2026-02-28 01:05:20.127865 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini 
file exists] ********** 2026-02-28 01:05:20.127871 | orchestrator | Saturday 28 February 2026 01:03:55 +0000 (0:00:03.650) 0:01:08.162 ***** 2026-02-28 01:05:20.127878 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:05:20.127884 | orchestrator | 2026-02-28 01:05:20.127890 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-28 01:05:20.127896 | orchestrator | Saturday 28 February 2026 01:04:00 +0000 (0:00:04.388) 0:01:12.550 ***** 2026-02-28 01:05:20.127903 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:20.127909 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.127915 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:20.127921 | orchestrator | 2026-02-28 01:05:20.127928 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-28 01:05:20.127934 | orchestrator | Saturday 28 February 2026 01:04:02 +0000 (0:00:01.886) 0:01:14.437 ***** 2026-02-28 01:05:20.127940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 
01:05:20.127959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.127967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.127973 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.127986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128029 | orchestrator | 
2026-02-28 01:05:20.128035 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-28 01:05:20.128042 | orchestrator | Saturday 28 February 2026 01:04:14 +0000 (0:00:12.319) 0:01:26.756 ***** 2026-02-28 01:05:20.128048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.128055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:20.128066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.128077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.128087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.128094 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:20.128101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.128107 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.128114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-02-28 01:05:20.128125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.128132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:20.128139 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:20.128145 | orchestrator | 2026-02-28 01:05:20.128152 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-28 01:05:20.128158 | orchestrator | Saturday 28 February 2026 01:04:16 +0000 (0:00:01.856) 0:01:28.612 ***** 2026-02-28 01:05:20.128173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.128180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.128198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:20.128208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:20.128256 | orchestrator | 2026-02-28 01:05:20.128264 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-28 01:05:20.128271 | orchestrator | Saturday 28 February 2026 01:04:21 +0000 (0:00:04.922) 0:01:33.535 ***** 2026-02-28 01:05:20.128278 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:20.128285 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:20.128292 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:20.128299 | orchestrator | 2026-02-28 01:05:20.128307 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-28 01:05:20.128314 | orchestrator | Saturday 28 February 2026 01:04:22 +0000 (0:00:00.890) 0:01:34.425 ***** 2026-02-28 01:05:20.128321 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.128328 | orchestrator | 2026-02-28 01:05:20.128335 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-28 01:05:20.128343 | orchestrator | Saturday 28 February 2026 01:04:24 +0000 (0:00:02.394) 0:01:36.820 ***** 2026-02-28 01:05:20.128350 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.128357 | orchestrator | 2026-02-28 01:05:20.128365 | orchestrator | TASK [barbican : 
Running barbican bootstrap container] ************************* 2026-02-28 01:05:20.128372 | orchestrator | Saturday 28 February 2026 01:04:27 +0000 (0:00:02.796) 0:01:39.616 ***** 2026-02-28 01:05:20.128380 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.128387 | orchestrator | 2026-02-28 01:05:20.128394 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-28 01:05:20.128401 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:13.903) 0:01:53.520 ***** 2026-02-28 01:05:20.128408 | orchestrator | 2026-02-28 01:05:20.128415 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-28 01:05:20.128421 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:00.092) 0:01:53.612 ***** 2026-02-28 01:05:20.128428 | orchestrator | 2026-02-28 01:05:20.128434 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-28 01:05:20.128440 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:00.093) 0:01:53.706 ***** 2026-02-28 01:05:20.128446 | orchestrator | 2026-02-28 01:05:20.128453 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-28 01:05:20.128459 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:00.092) 0:01:53.798 ***** 2026-02-28 01:05:20.128465 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:20.128471 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.128481 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:20.128487 | orchestrator | 2026-02-28 01:05:20.128494 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-28 01:05:20.128500 | orchestrator | Saturday 28 February 2026 01:04:57 +0000 (0:00:15.704) 0:02:09.502 ***** 2026-02-28 01:05:20.128506 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:20.128512 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.128522 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:20.128529 | orchestrator | 2026-02-28 01:05:20.128535 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-28 01:05:20.128542 | orchestrator | Saturday 28 February 2026 01:05:08 +0000 (0:00:11.276) 0:02:20.779 ***** 2026-02-28 01:05:20.128548 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:20.128554 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:20.128560 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:20.128566 | orchestrator | 2026-02-28 01:05:20.128573 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:05:20.128580 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:05:20.128592 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:05:20.128598 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:05:20.128605 | orchestrator | 2026-02-28 01:05:20.128611 | orchestrator | 2026-02-28 01:05:20.128617 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:05:20.128624 | orchestrator | Saturday 28 February 2026 01:05:18 +0000 (0:00:09.954) 0:02:30.734 ***** 2026-02-28 01:05:20.128630 | orchestrator | =============================================================================== 2026-02-28 01:05:20.128636 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.19s 2026-02-28 01:05:20.128642 | orchestrator | barbican : Restart barbican-api container ------------------------------ 15.70s 2026-02-28 01:05:20.128649 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 
13.90s 2026-02-28 01:05:20.128655 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.32s 2026-02-28 01:05:20.128661 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.28s 2026-02-28 01:05:20.128667 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.95s 2026-02-28 01:05:20.128673 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.63s 2026-02-28 01:05:20.128680 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.55s 2026-02-28 01:05:20.128686 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.92s 2026-02-28 01:05:20.128708 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.67s 2026-02-28 01:05:20.128720 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.53s 2026-02-28 01:05:20.128730 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 4.39s 2026-02-28 01:05:20.128737 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.15s 2026-02-28 01:05:20.128743 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.14s 2026-02-28 01:05:20.128749 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.71s 2026-02-28 01:05:20.128755 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.65s 2026-02-28 01:05:20.128761 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.98s 2026-02-28 01:05:20.128768 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.80s 2026-02-28 01:05:20.128775 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.79s 
2026-02-28 01:05:20.128785 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.39s 2026-02-28 01:05:20.128795 | orchestrator | 2026-02-28 01:05:20 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED 2026-02-28 01:05:20.128805 | orchestrator | 2026-02-28 01:05:20 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED 2026-02-28 01:05:20.128815 | orchestrator | 2026-02-28 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:23.165728 | orchestrator | 2026-02-28 01:05:23 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:05:23.166769 | orchestrator | 2026-02-28 01:05:23 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED 2026-02-28 01:05:23.171273 | orchestrator | 2026-02-28 01:05:23 | INFO  | Task 347d50b2-873a-4025-92a9-155dd5c02c4c is in state STARTED 2026-02-28 01:05:23.174850 | orchestrator | 2026-02-28 01:05:23 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED 2026-02-28 01:05:23.174934 | orchestrator | 2026-02-28 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:26.245596 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:05:26.247944 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED 2026-02-28 01:05:26.251528 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task 347d50b2-873a-4025-92a9-155dd5c02c4c is in state STARTED 2026-02-28 01:05:26.254209 | orchestrator | 2026-02-28 01:05:26 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state STARTED 2026-02-28 01:05:26.254307 | orchestrator | 2026-02-28 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:29.354875 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:05:29.354961 | orchestrator | 
2026-02-28 01:05:29 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
[polling output collapsed: from 01:05:29 to 01:06:45 the tasks b7663edc-c115-4a19-8223-eaf0e3b5cd56, 4bd189e6-4bc6-4c59-b939-d606eb0f966f, 347d50b2-873a-4025-92a9-155dd5c02c4c and 0ab486a7-bcb8-4405-b04e-6d3c5799a295 were reported in state STARTED roughly every 3 seconds, each polling round ending with "Wait 1 second(s) until the next check"; only state changes are shown below]
2026-02-28 01:06:15.241484 | orchestrator | 2026-02-28 01:06:15 | INFO  | Task 347d50b2-873a-4025-92a9-155dd5c02c4c is in state SUCCESS
2026-02-28 01:06:18.298264 | orchestrator | 2026-02-28 01:06:18 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED
2026-02-28 01:06:45.945227 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:06:45.955288 | orchestrator |
2026-02-28 01:06:45.955360 | orchestrator |
2026-02-28 01:06:45.955370 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-02-28 01:06:45.955379 | orchestrator |
2026-02-28 01:06:45.955386 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-02-28 01:06:45.955394 | orchestrator | Saturday 28 February 2026 01:05:31 +0000 (0:00:00.196) 0:00:00.196 *****
2026-02-28 01:06:45.955401 | orchestrator | changed: [localhost]
2026-02-28 01:06:45.955408 | orchestrator |
2026-02-28 01:06:45.955415 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-02-28 01:06:45.955421 | orchestrator | Saturday 28 February 2026 01:05:32 +0000 (0:00:00.754) 0:00:00.950 *****
2026-02-28 01:06:45.955428 | orchestrator | changed: [localhost]
2026-02-28 01:06:45.955435 | orchestrator |
2026-02-28 01:06:45.955441 | orchestrator | TASK
[Download ironic-agent kernel] ******************************************** 2026-02-28 01:06:45.955448 | orchestrator | Saturday 28 February 2026 01:06:05 +0000 (0:00:33.052) 0:00:34.002 ***** 2026-02-28 01:06:45.955473 | orchestrator | changed: [localhost] 2026-02-28 01:06:45.955479 | orchestrator | 2026-02-28 01:06:45.955486 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:06:45.955492 | orchestrator | 2026-02-28 01:06:45.955499 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:06:45.955505 | orchestrator | Saturday 28 February 2026 01:06:12 +0000 (0:00:06.749) 0:00:40.752 ***** 2026-02-28 01:06:45.955512 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:06:45.955519 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:06:45.955525 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:06:45.955532 | orchestrator | 2026-02-28 01:06:45.955538 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:06:45.955577 | orchestrator | Saturday 28 February 2026 01:06:12 +0000 (0:00:00.440) 0:00:41.192 ***** 2026-02-28 01:06:45.955584 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-02-28 01:06:45.955591 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-02-28 01:06:45.955598 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-02-28 01:06:45.955605 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-02-28 01:06:45.955611 | orchestrator | 2026-02-28 01:06:45.955617 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-02-28 01:06:45.955645 | orchestrator | skipping: no hosts matched 2026-02-28 01:06:45.955654 | orchestrator | 2026-02-28 01:06:45.955661 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 01:06:45.955667 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:45.955676 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:45.955684 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:45.955691 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:45.955720 | orchestrator | 2026-02-28 01:06:45.955727 | orchestrator | 2026-02-28 01:06:45.955733 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:06:45.955739 | orchestrator | Saturday 28 February 2026 01:06:13 +0000 (0:00:01.102) 0:00:42.294 ***** 2026-02-28 01:06:45.955746 | orchestrator | =============================================================================== 2026-02-28 01:06:45.955752 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.05s 2026-02-28 01:06:45.955758 | orchestrator | Download ironic-agent kernel -------------------------------------------- 6.75s 2026-02-28 01:06:45.955792 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2026-02-28 01:06:45.955798 | orchestrator | Ensure the destination directory exists --------------------------------- 0.75s 2026-02-28 01:06:45.955805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2026-02-28 01:06:45.955811 | orchestrator | 2026-02-28 01:06:45.955817 | orchestrator | 2026-02-28 01:06:45.955823 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:06:45.955829 | orchestrator | 2026-02-28 01:06:45.955836 | orchestrator | TASK [Group hosts based on Kolla 
action] *************************************** 2026-02-28 01:06:45.955842 | orchestrator | Saturday 28 February 2026 01:02:55 +0000 (0:00:00.342) 0:00:00.342 ***** 2026-02-28 01:06:45.955848 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:06:45.955854 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:06:45.955861 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:06:45.955868 | orchestrator | 2026-02-28 01:06:45.955875 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:06:45.955883 | orchestrator | Saturday 28 February 2026 01:02:55 +0000 (0:00:00.312) 0:00:00.655 ***** 2026-02-28 01:06:45.955897 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-28 01:06:45.955908 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-28 01:06:45.955919 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-28 01:06:45.955928 | orchestrator | 2026-02-28 01:06:45.955935 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-28 01:06:45.955943 | orchestrator | 2026-02-28 01:06:45.955950 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:45.955957 | orchestrator | Saturday 28 February 2026 01:02:56 +0000 (0:00:00.831) 0:00:01.487 ***** 2026-02-28 01:06:45.955964 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:06:45.955972 | orchestrator | 2026-02-28 01:06:45.955979 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-02-28 01:06:45.955986 | orchestrator | Saturday 28 February 2026 01:02:57 +0000 (0:00:01.130) 0:00:02.617 ***** 2026-02-28 01:06:45.956007 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-28 01:06:45.956018 | orchestrator | 2026-02-28 01:06:45.956028 | 
orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-28 01:06:45.956043 | orchestrator | Saturday 28 February 2026 01:03:01 +0000 (0:00:04.269) 0:00:06.887 ***** 2026-02-28 01:06:45.956056 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-28 01:06:45.956066 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-28 01:06:45.956076 | orchestrator | 2026-02-28 01:06:45.956087 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-28 01:06:45.956096 | orchestrator | Saturday 28 February 2026 01:03:09 +0000 (0:00:07.236) 0:00:14.124 ***** 2026-02-28 01:06:45.956107 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:06:45.956260 | orchestrator | 2026-02-28 01:06:45.956273 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-28 01:06:45.956283 | orchestrator | Saturday 28 February 2026 01:03:12 +0000 (0:00:03.686) 0:00:17.810 ***** 2026-02-28 01:06:45.956292 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:06:45.956302 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-28 01:06:45.956313 | orchestrator | 2026-02-28 01:06:45.956323 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-28 01:06:45.956333 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:05.029) 0:00:22.839 ***** 2026-02-28 01:06:45.956350 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:06:45.956361 | orchestrator | 2026-02-28 01:06:45.956371 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-28 01:06:45.956381 | orchestrator | Saturday 28 February 2026 01:03:23 +0000 (0:00:05.367) 0:00:28.207 
***** 2026-02-28 01:06:45.956392 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-28 01:06:45.956402 | orchestrator | 2026-02-28 01:06:45.956411 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-28 01:06:45.956421 | orchestrator | Saturday 28 February 2026 01:03:27 +0000 (0:00:03.948) 0:00:32.155 ***** 2026-02-28 01:06:45.956435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.956460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-02-28 01:06:45.956468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.956496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.956514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.956807 | orchestrator | 2026-02-28 01:06:45.956817 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-28 01:06:45.956828 | orchestrator | Saturday 28 February 2026 01:03:31 +0000 (0:00:04.310) 0:00:36.466 ***** 2026-02-28 01:06:45.956837 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:45.956848 | orchestrator | 2026-02-28 01:06:45.956858 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-28 01:06:45.956867 | orchestrator | Saturday 28 February 2026 01:03:31 +0000 (0:00:00.172) 0:00:36.638 ***** 2026-02-28 01:06:45.956885 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:45.956896 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:45.956905 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:45.956912 | orchestrator | 2026-02-28 01:06:45.956918 | 
orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:45.956924 | orchestrator | Saturday 28 February 2026 01:03:32 +0000 (0:00:00.723) 0:00:37.362 ***** 2026-02-28 01:06:45.956931 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:06:45.956937 | orchestrator | 2026-02-28 01:06:45.956943 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-28 01:06:45.956950 | orchestrator | Saturday 28 February 2026 01:03:34 +0000 (0:00:02.537) 0:00:39.900 ***** 2026-02-28 01:06:45.956956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.956966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.956979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.956990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}}) 2026-02-28 01:06:45.957092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957120 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.957137 | orchestrator | 2026-02-28 01:06:45.957143 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-28 01:06:45.957150 | orchestrator | Saturday 28 February 2026 01:03:41 +0000 (0:00:07.016) 0:00:46.916 ***** 2026-02-28 01:06:45.957157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.957163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.957202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45 | INFO  | Task 0ab486a7-bcb8-4405-b04e-6d3c5799a295 is in state SUCCESS 2026-02-28 01:06:45.957557 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957591 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:45.957598 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.957604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.957611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957653 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:45.957659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.957666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.957672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957732 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:45.957739 | orchestrator | 2026-02-28 01:06:45.957745 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-28 01:06:45.957751 | orchestrator | Saturday 28 February 2026 01:03:45 +0000 (0:00:03.179) 0:00:50.095 ***** 2026-02-28 01:06:45.957758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.957764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.957771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957810 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:45.957817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.957828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.957841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957904 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:45.957913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.957922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.957932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.957997 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:45.958007 | orchestrator | 2026-02-28 01:06:45.958067 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-28 01:06:45.958077 | orchestrator | Saturday 28 February 2026 01:03:47 +0000 (0:00:02.640) 0:00:52.736 ***** 
2026-02-28 01:06:45.958083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.958090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.958110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.958121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.958207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-02-28 01:06:45.958956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959051 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959060 | orchestrator | 2026-02-28 01:06:45.959071 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-28 01:06:45.959081 | orchestrator | Saturday 28 February 2026 01:03:54 +0000 (0:00:06.935) 0:00:59.672 ***** 2026-02-28 01:06:45.959090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.959119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.959142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.959153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959332 | orchestrator | 2026-02-28 01:06:45.959341 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-28 01:06:45.959350 | orchestrator | Saturday 28 February 2026 01:04:24 +0000 (0:00:29.440) 0:01:29.113 ***** 2026-02-28 01:06:45.959358 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-28 01:06:45.959367 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-28 01:06:45.959376 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-28 01:06:45.959384 | orchestrator | 2026-02-28 01:06:45.959391 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-28 01:06:45.959407 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:08.938) 0:01:38.051 ***** 2026-02-28 01:06:45.959415 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-28 01:06:45.959424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-28 01:06:45.959432 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-28 01:06:45.959441 | orchestrator | 2026-02-28 01:06:45.959449 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-28 01:06:45.959457 | orchestrator | Saturday 28 February 2026 01:04:37 +0000 (0:00:04.482) 0:01:42.533 ***** 2026-02-28 01:06:45.959466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.959482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.959492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.959506 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959582 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959687 | orchestrator | 2026-02-28 01:06:45.959695 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-28 01:06:45.959730 | orchestrator | Saturday 28 February 
2026 01:04:43 +0000 (0:00:06.231) 0:01:48.765 ***** 2026-02-28 01:06:45.959740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.959755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.959765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.959778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.959921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-02-28 01:06:45.959947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.959956 | orchestrator | 2026-02-28 01:06:45.959964 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:45.959973 | orchestrator | Saturday 28 February 2026 01:04:49 +0000 (0:00:05.354) 0:01:54.119 ***** 2026-02-28 01:06:45.959982 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:45.959991 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:45.960000 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:45.960008 | orchestrator | 2026-02-28 01:06:45.960018 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-28 01:06:45.960027 | orchestrator | Saturday 28 February 2026 01:04:49 +0000 (0:00:00.570) 0:01:54.690 ***** 2026-02-28 01:06:45.960036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.960052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.960061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960145 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:45.960154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.960170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.960180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 
01:06:45.960197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960229 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
01:06:45.960238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:45.960252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:45.960261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:45.960309 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:45.960317 | orchestrator | 2026-02-28 01:06:45.960327 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-28 01:06:45.960336 | orchestrator | Saturday 28 February 2026 01:04:52 +0000 (0:00:02.451) 0:01:57.141 ***** 2026-02-28 01:06:45.960345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.960361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.960379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:45.960394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-02-28 01:06:45.960404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2026-02-28 01:06:45.960478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960508 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:45.960560 | orchestrator | 2026-02-28 01:06:45.960568 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:45.960577 | orchestrator | Saturday 28 February 2026 01:04:58 +0000 (0:00:06.333) 0:02:03.475 ***** 2026-02-28 01:06:45.960586 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:45.960596 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:45.960605 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:45.960613 | orchestrator | 2026-02-28 01:06:45.960622 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-28 01:06:45.960631 | orchestrator | Saturday 28 February 2026 01:04:59 +0000 (0:00:01.008) 0:02:04.483 ***** 2026-02-28 01:06:45.960640 | orchestrator | changed: [testbed-node-0] 
=> (item=designate) 2026-02-28 01:06:45.960649 | orchestrator | 2026-02-28 01:06:45.960657 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-28 01:06:45.960666 | orchestrator | Saturday 28 February 2026 01:05:02 +0000 (0:00:02.659) 0:02:07.143 ***** 2026-02-28 01:06:45.960675 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 01:06:45.960690 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-28 01:06:45.960769 | orchestrator | 2026-02-28 01:06:45.960780 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-28 01:06:45.960790 | orchestrator | Saturday 28 February 2026 01:05:04 +0000 (0:00:02.507) 0:02:09.651 ***** 2026-02-28 01:06:45.960799 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.960807 | orchestrator | 2026-02-28 01:06:45.960817 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:06:45.960826 | orchestrator | Saturday 28 February 2026 01:05:24 +0000 (0:00:19.789) 0:02:29.441 ***** 2026-02-28 01:06:45.960832 | orchestrator | 2026-02-28 01:06:45.960837 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:06:45.960842 | orchestrator | Saturday 28 February 2026 01:05:24 +0000 (0:00:00.324) 0:02:29.765 ***** 2026-02-28 01:06:45.960848 | orchestrator | 2026-02-28 01:06:45.960853 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:06:45.960859 | orchestrator | Saturday 28 February 2026 01:05:25 +0000 (0:00:00.261) 0:02:30.027 ***** 2026-02-28 01:06:45.960864 | orchestrator | 2026-02-28 01:06:45.960870 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-28 01:06:45.961003 | orchestrator | Saturday 28 February 2026 01:05:25 +0000 (0:00:00.235) 0:02:30.263 
*****2026-02-28 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:45.961015 | orchestrator | 2026-02-28 01:06:45.961020 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961026 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:45.961031 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:45.961036 | orchestrator | 2026-02-28 01:06:45.961042 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-28 01:06:45.961047 | orchestrator | Saturday 28 February 2026 01:05:41 +0000 (0:00:16.529) 0:02:46.792 ***** 2026-02-28 01:06:45.961053 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961058 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:45.961063 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:45.961068 | orchestrator | 2026-02-28 01:06:45.961074 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-28 01:06:45.961079 | orchestrator | Saturday 28 February 2026 01:05:56 +0000 (0:00:14.325) 0:03:01.118 ***** 2026-02-28 01:06:45.961084 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:45.961089 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:45.961095 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961100 | orchestrator | 2026-02-28 01:06:45.961105 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-28 01:06:45.961111 | orchestrator | Saturday 28 February 2026 01:06:08 +0000 (0:00:12.633) 0:03:13.752 ***** 2026-02-28 01:06:45.961116 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961121 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:45.961126 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:45.961132 | orchestrator | 2026-02-28 01:06:45.961137 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-28 
01:06:45.961142 | orchestrator | Saturday 28 February 2026 01:06:17 +0000 (0:00:08.592) 0:03:22.344 ***** 2026-02-28 01:06:45.961147 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:45.961153 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961158 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:45.961164 | orchestrator | 2026-02-28 01:06:45.961169 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-28 01:06:45.961174 | orchestrator | Saturday 28 February 2026 01:06:26 +0000 (0:00:09.012) 0:03:31.357 ***** 2026-02-28 01:06:45.961185 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961191 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:45.961196 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:45.961202 | orchestrator | 2026-02-28 01:06:45.961207 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-28 01:06:45.961219 | orchestrator | Saturday 28 February 2026 01:06:33 +0000 (0:00:07.357) 0:03:38.715 ***** 2026-02-28 01:06:45.961225 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:45.961230 | orchestrator | 2026-02-28 01:06:45.961235 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:06:45.961241 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:06:45.961247 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:06:45.961253 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:06:45.961258 | orchestrator | 2026-02-28 01:06:45.961264 | orchestrator | 2026-02-28 01:06:45.961269 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:06:45.961274 | orchestrator | 
Saturday 28 February 2026 01:06:42 +0000 (0:00:08.868) 0:03:47.583 ***** 2026-02-28 01:06:45.961280 | orchestrator | =============================================================================== 2026-02-28 01:06:45.961285 | orchestrator | designate : Copying over designate.conf -------------------------------- 29.44s 2026-02-28 01:06:45.961290 | orchestrator | designate : Running Designate bootstrap container ---------------------- 19.79s 2026-02-28 01:06:45.961296 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.53s 2026-02-28 01:06:45.961301 | orchestrator | designate : Restart designate-api container ---------------------------- 14.33s 2026-02-28 01:06:45.961306 | orchestrator | designate : Restart designate-central container ------------------------ 12.63s 2026-02-28 01:06:45.961311 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.01s 2026-02-28 01:06:45.961316 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.94s 2026-02-28 01:06:45.961321 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.87s 2026-02-28 01:06:45.961326 | orchestrator | designate : Restart designate-producer container ------------------------ 8.59s 2026-02-28 01:06:45.961330 | orchestrator | designate : Restart designate-worker container -------------------------- 7.36s 2026-02-28 01:06:45.961335 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.24s 2026-02-28 01:06:45.961340 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.02s 2026-02-28 01:06:45.961345 | orchestrator | designate : Copying over config.json files for services ----------------- 6.94s 2026-02-28 01:06:45.961350 | orchestrator | designate : Check designate containers ---------------------------------- 6.33s 2026-02-28 01:06:45.961355 | orchestrator | designate : Copying 
over rndc.conf -------------------------------------- 6.23s 2026-02-28 01:06:45.961360 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 5.37s 2026-02-28 01:06:45.961365 | orchestrator | designate : Copying over rndc.key --------------------------------------- 5.35s 2026-02-28 01:06:45.961370 | orchestrator | service-ks-register : designate | Creating users ------------------------ 5.03s 2026-02-28 01:06:45.961380 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.48s 2026-02-28 01:06:45.961386 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.31s 2026-02-28 01:06:49.000995 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:06:49.007230 | orchestrator | 2026-02-28 01:06:49 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:06:49.015958 | orchestrator | 2026-02-28 01:06:49 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:06:49.019445 | orchestrator | 2026-02-28 01:06:49 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED 2026-02-28 01:06:49.019563 | orchestrator | 2026-02-28 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:52.057138 | orchestrator | 2026-02-28 01:06:52 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:06:52.059720 | orchestrator | 2026-02-28 01:06:52 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:06:52.059855 | orchestrator | 2026-02-28 01:06:52 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:06:52.061724 | orchestrator | 2026-02-28 01:06:52 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED 2026-02-28 01:06:52.062164 | orchestrator | 2026-02-28 01:06:52 | INFO  | Wait 1 second(s) until the next check 
2026-02-28 01:06:55.115733 | orchestrator | 2026-02-28 01:06:55 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED
2026-02-28 01:06:55.115982 | orchestrator | 2026-02-28 01:06:55 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:06:55.117221 | orchestrator | 2026-02-28 01:06:55 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:06:55.118203 | orchestrator | 2026-02-28 01:06:55 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:06:55.118234 | orchestrator | 2026-02-28 01:06:55 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:06:58.149616 | orchestrator | 2026-02-28 01:06:58 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED
2026-02-28 01:06:58.153210 | orchestrator | 2026-02-28 01:06:58 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:06:58.154694 | orchestrator | 2026-02-28 01:06:58 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:06:58.157110 | orchestrator | 2026-02-28 01:06:58 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:06:58.157379 | orchestrator | 2026-02-28 01:06:58 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:01.211985 | orchestrator | 2026-02-28 01:07:01 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED
2026-02-28 01:07:01.214948 | orchestrator | 2026-02-28 01:07:01 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:07:01.218073 | orchestrator | 2026-02-28 01:07:01 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:07:01.220642 | orchestrator | 2026-02-28 01:07:01 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state STARTED
2026-02-28 01:07:01.221114 | orchestrator | 2026-02-28 01:07:01 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:04.256169 | orchestrator | 2026-02-28 01:07:04 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED
2026-02-28 01:07:04.256923 | orchestrator | 2026-02-28 01:07:04 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:07:04.258071 | orchestrator | 2026-02-28 01:07:04 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED
2026-02-28 01:07:04.261214 | orchestrator | 2026-02-28 01:07:04 | INFO  | Task 4bd189e6-4bc6-4c59-b939-d606eb0f966f is in state SUCCESS
2026-02-28 01:07:04.261495 | orchestrator |
2026-02-28 01:07:04.263418 | orchestrator |
2026-02-28 01:07:04.263491 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:07:04.263501 | orchestrator |
2026-02-28 01:07:04.263509 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:07:04.263531 | orchestrator | Saturday 28 February 2026 01:02:44 +0000 (0:00:00.318) 0:00:00.318 *****
2026-02-28 01:07:04.263555 | orchestrator | ok: [testbed-manager]
2026-02-28 01:07:04.263585 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:07:04.263594 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:07:04.263601 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:07:04.263608 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:07:04.263615 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:07:04.263622 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:07:04.263630 | orchestrator |
2026-02-28 01:07:04.263637 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:07:04.263644 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:01.702) 0:00:02.021 *****
2026-02-28 01:07:04.263652 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263660 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263667 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263675 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263682 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263689 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263765 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-28 01:07:04.263774 | orchestrator |
2026-02-28 01:07:04.263781 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-28 01:07:04.263789 | orchestrator |
2026-02-28 01:07:04.263796 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-28 01:07:04.263803 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:01.331) 0:00:03.353 *****
2026-02-28 01:07:04.263812 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 01:07:04.263820 | orchestrator |
2026-02-28 01:07:04.263828 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-28 01:07:04.263835 | orchestrator | Saturday 28 February 2026 01:02:49 +0000 (0:00:02.158) 0:00:05.511 *****
2026-02-28 01:07:04.263937 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-28 01:07:04.263951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.263960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.263968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.263997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264013 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264046 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264247 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-28 01:07:04.264259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264299 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264386 | orchestrator |
2026-02-28 01:07:04.264394 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-28 01:07:04.264402 | orchestrator | Saturday 28 February 2026 01:02:53 +0000 (0:00:04.055) 0:00:09.567 *****
2026-02-28 01:07:04.264410 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 01:07:04.264417 | orchestrator |
2026-02-28 01:07:04.264425 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-02-28 01:07:04.264432 | orchestrator | Saturday 28 February 2026 01:02:55 +0000 (0:00:01.687) 0:00:11.255 *****
2026-02-28 01:07:04.264444 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-28 01:07:04.264469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264539 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-28 01:07:04.264547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264604 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.264673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264770 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-28 01:07:04.264783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.264804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:07:04.265949 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.265983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.265992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:07:04.266000 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.266008 | orchestrator | 2026-02-28 01:07:04.266058 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-28 01:07:04.266080 | orchestrator | Saturday 28 February 2026 01:03:01 +0000 (0:00:06.350) 0:00:17.606 ***** 2026-02-28 01:07:04.266097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 01:07:04.266106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266114 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 01:07:04.266151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266177 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266313 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.266321 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.266328 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.266341 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.266349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266403 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.266415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 
01:07:04.266488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266498 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.266509 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.266520 | orchestrator | 2026-02-28 01:07:04.266537 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-28 01:07:04.266550 | orchestrator | Saturday 28 February 2026 01:03:03 +0000 (0:00:01.788) 0:00:19.395 ***** 2026-02-28 01:07:04.266568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266770 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 01:07:04.266780 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2026-02-28 01:07:04.266795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 01:07:04.266821 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.266828 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.266836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266843 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.266851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266886 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.266894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:07:04.266936 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.266943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.266976 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.266984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:07:04.266991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.267003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:07:04.267011 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.267018 | orchestrator | 2026-02-28 01:07:04.267026 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-28 01:07:04.267034 | orchestrator | Saturday 28 February 2026 01:03:05 +0000 (0:00:02.119) 0:00:21.514 ***** 2026-02-28 01:07:04.267041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 01:07:04.267049 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267100 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.267141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267156 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267226 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 01:07:04.267234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.267285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.267313 | orchestrator | 2026-02-28 01:07:04.267323 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-28 01:07:04.267330 | orchestrator | Saturday 28 February 2026 01:03:12 +0000 (0:00:06.907) 0:00:28.422 ***** 2026-02-28 01:07:04.267338 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:07:04.267345 | orchestrator | 2026-02-28 01:07:04.267351 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-28 01:07:04.267358 | orchestrator | Saturday 28 February 2026 01:03:14 +0000 (0:00:01.504) 0:00:29.927 ***** 2026-02-28 01:07:04.267372 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267380 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267391 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1086679, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.260245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267398 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267405 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1086679, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.260245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267412 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.267422 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267434 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267441 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1086632, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2461271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086636, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267459 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1086679, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.260245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267466 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1086632, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2461271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267473 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1086679, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.260245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267484 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1086679, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.260245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.267499 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086649, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1772237872.2581272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267506 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1086632, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2461271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267517 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1086679, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.260245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267524 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086627, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2441735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267531 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2026-02-28 01:07:04.267538 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086649, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2581272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267549 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2026-02-28 01:07:04.267560 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2026-02-28 01:07:04.267567 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086637, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2477362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267659 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2026-02-28 01:07:04.267675 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2026-02-28 01:07:04.267687 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2026-02-28 01:07:04.267719 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2026-02-28 01:07:04.267746 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2026-02-28 01:07:04.267758 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2026-02-28 01:07:04.267770 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2026-02-28 01:07:04.267784 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1086647, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267791 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2026-02-28 01:07:04.267798 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2026-02-28 01:07:04.267805 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086639, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2479613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267821 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2026-02-28 01:07:04.267828 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', ...})
2026-02-28 01:07:04.267836 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086634, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267843 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', ...})
2026-02-28 01:07:04.267854 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2026-02-28 01:07:04.267862 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2026-02-28 01:07:04.267869 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086678, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2595387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267885 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', ...})
2026-02-28 01:07:04.267897 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2026-02-28 01:07:04.267909 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2026-02-28 01:07:04.267920 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2026-02-28 01:07:04.267938 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2026-02-28 01:07:04.267951 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2026-02-28 01:07:04.267970 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2026-02-28 01:07:04.267982 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086622, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.243127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.267989 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2026-02-28 01:07:04.267996 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2026-02-28 01:07:04.268003 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2026-02-28 01:07:04.268014 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2026-02-28 01:07:04.268021 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086693, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268036 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2026-02-28 01:07:04.268046 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', ...})
2026-02-28 01:07:04.268054 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2026-02-28 01:07:04.268061 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2026-02-28 01:07:04.268068 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2026-02-28 01:07:04.268078 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2026-02-28 01:07:04.268087 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086674, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.258817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268109 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2026-02-28 01:07:04.268131 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2026-02-28 01:07:04.268142 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', ...})
2026-02-28 01:07:04.268153 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2026-02-28 01:07:04.268164 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2026-02-28 01:07:04.268181 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086630, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2446024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268212 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2026-02-28 01:07:04.268232 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2026-02-28 01:07:04.268249 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2026-02-28 01:07:04.268260 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2026-02-28 01:07:04.268267 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2026-02-28 01:07:04.268274 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2026-02-28 01:07:04.268286 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2026-02-28 01:07:04.268298 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1086624, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2438457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2026-02-28 01:07:04.268316 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2026-02-28 01:07:04.268324 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2026-02-28 01:07:04.268331 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268338 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086674, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime':
1764530892.0, 'ctime': 1772237872.258817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268349 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086634, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268361 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086630, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2446024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268368 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1086624, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2438457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268379 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086627, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2441735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268386 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268393 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086678, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2595387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268400 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1086624, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2438457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268412 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268424 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086622, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.243127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268431 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268438 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268448 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086622, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.243127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268456 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268463 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268475 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.268486 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086693, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268494 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086693, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268501 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086637, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2477362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268508 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268520 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.268532 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268541 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086674, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.258817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268553 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268575 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086674, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.258817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268587 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086630, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2446024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268597 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086630, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2446024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268608 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268626 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268637 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.268647 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.268658 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1086624, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2438457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268675 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1086647, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268692 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 5051, 'inode': 1086624, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2438457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268763 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268775 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268791 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268803 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268815 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268834 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.268846 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086639, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2479613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268863 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:07:04.268876 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.268887 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086634, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2471364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268899 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086678, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2595387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-02-28 01:07:04.268912 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086622, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.243127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268919 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086693, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268927 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086674, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.258817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268939 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086630, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2446024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268950 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1086624, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2438457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268957 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086643, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2491207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:07:04.268964 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086641, 
'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.248203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268975 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086688, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.262218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:07:04.268983 | orchestrator |
2026-02-28 01:07:04.268990 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-28 01:07:04.268997 | orchestrator | Saturday 28 February 2026 01:03:53 +0000 (0:00:39.071) 0:01:08.998 *****
2026-02-28 01:07:04.269004 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 01:07:04.269011 | orchestrator |
2026-02-28 01:07:04.269018 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-28 01:07:04.269025 | orchestrator | Saturday 28 February 2026 01:03:54 +0000 (0:00:01.142) 0:01:10.140 *****
2026-02-28 01:07:04.269036 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269072 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 01:07:04.269078 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269112 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 01:07:04.269119 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269152 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-28 01:07:04.269159 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269193 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269231 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269265 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-28 01:07:04.269297 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-28 01:07:04.269303 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-28 01:07:04.269309 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-28 01:07:04.269316 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-28 01:07:04.269322 | orchestrator |
2026-02-28 01:07:04.269328 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-28 01:07:04.269335 | orchestrator | Saturday 28 February 2026 01:04:00 +0000 (0:00:05.739) 0:01:15.880 *****
2026-02-28 01:07:04.269341 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-28 01:07:04.269353 |
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:07:04.269359 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.269366 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.269372 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:07:04.269378 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.269384 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:07:04.269391 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.269397 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:07:04.269403 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.269409 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:07:04.269416 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.269422 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-28 01:07:04.269428 | orchestrator | 2026-02-28 01:07:04.269438 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-28 01:07:04.269445 | orchestrator | Saturday 28 February 2026 01:04:36 +0000 (0:00:36.396) 0:01:52.276 ***** 2026-02-28 01:07:04.269451 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:07:04.269457 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.269463 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:07:04.269470 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.269476 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:07:04.269482 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.269488 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:07:04.269495 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.269501 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:07:04.269507 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.269513 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:07:04.269519 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.269526 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-28 01:07:04.269532 | orchestrator | 2026-02-28 01:07:04.269538 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-28 01:07:04.269544 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:05.695) 0:01:57.972 ***** 2026-02-28 01:07:04.269551 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:07:04.269558 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:07:04.269564 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.269571 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:07:04.269577 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.269583 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.269589 | orchestrator | changed: 
[testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-28 01:07:04.269654 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:07:04.269671 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.269677 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:07:04.269683 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.269690 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:07:04.269719 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.269726 | orchestrator | 2026-02-28 01:07:04.269733 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-28 01:07:04.269739 | orchestrator | Saturday 28 February 2026 01:04:47 +0000 (0:00:04.922) 0:02:02.895 ***** 2026-02-28 01:07:04.269745 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:07:04.269751 | orchestrator | 2026-02-28 01:07:04.269758 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-28 01:07:04.269764 | orchestrator | Saturday 28 February 2026 01:04:48 +0000 (0:00:01.400) 0:02:04.295 ***** 2026-02-28 01:07:04.269770 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.269777 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.269783 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.269789 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.269795 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.269801 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.269807 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 01:07:04.269813 | orchestrator | 2026-02-28 01:07:04.269820 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-28 01:07:04.269826 | orchestrator | Saturday 28 February 2026 01:04:49 +0000 (0:00:00.862) 0:02:05.158 ***** 2026-02-28 01:07:04.269832 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.269839 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:04.269845 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.269851 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.269857 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:04.269863 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:04.269869 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.269875 | orchestrator | 2026-02-28 01:07:04.269882 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-28 01:07:04.269888 | orchestrator | Saturday 28 February 2026 01:04:53 +0000 (0:00:04.044) 0:02:09.202 ***** 2026-02-28 01:07:04.269907 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269914 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.269920 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269926 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269936 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.269943 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.269949 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269955 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.269961 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269968 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.269974 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269980 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.269986 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:07:04.269993 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.269999 | orchestrator | 2026-02-28 01:07:04.270005 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-28 01:07:04.270044 | orchestrator | Saturday 28 February 2026 01:04:57 +0000 (0:00:03.705) 0:02:12.907 ***** 2026-02-28 01:07:04.270052 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:07:04.270059 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:07:04.270065 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.270072 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.270078 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-28 01:07:04.270084 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:07:04.270090 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.270096 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:07:04.270103 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.270109 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:07:04.270119 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.270125 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:07:04.270132 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.270138 | orchestrator | 2026-02-28 01:07:04.270144 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-28 01:07:04.270155 | orchestrator | Saturday 28 February 2026 01:05:00 +0000 (0:00:03.083) 0:02:15.991 ***** 2026-02-28 01:07:04.270161 | orchestrator | [WARNING]: Skipped 2026-02-28 01:07:04.270168 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-28 01:07:04.270174 | orchestrator | due to this access issue: 2026-02-28 01:07:04.270180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-28 01:07:04.270186 | orchestrator | not a directory 2026-02-28 01:07:04.270193 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:07:04.270199 | orchestrator | 2026-02-28 01:07:04.270205 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-28 01:07:04.270212 | orchestrator | Saturday 28 February 2026 01:05:02 +0000 (0:00:01.750) 0:02:17.741 ***** 2026-02-28 01:07:04.270218 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.270224 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.270230 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.270236 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.270242 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.270249 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.270255 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.270261 | 
orchestrator | 2026-02-28 01:07:04.270267 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-28 01:07:04.270273 | orchestrator | Saturday 28 February 2026 01:05:02 +0000 (0:00:00.899) 0:02:18.640 ***** 2026-02-28 01:07:04.270280 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.270286 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:04.270292 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:04.270298 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:04.270304 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:07:04.270310 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:07:04.270316 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:07:04.270323 | orchestrator | 2026-02-28 01:07:04.270329 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-28 01:07:04.270335 | orchestrator | Saturday 28 February 2026 01:05:04 +0000 (0:00:01.354) 0:02:19.995 ***** 2026-02-28 01:07:04.270342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270367 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 01:07:04.270375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270430 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:07:04.270437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270461 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270479 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270522 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 01:07:04.270535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:07:04.270565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:07:04.270599 | orchestrator | 2026-02-28 01:07:04.270605 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-28 01:07:04.270612 | orchestrator | Saturday 28 February 2026 01:05:09 +0000 (0:00:05.018) 0:02:25.013 ***** 2026-02-28 01:07:04.270618 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 01:07:04.270624 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:07:04.270631 | orchestrator | 2026-02-28 01:07:04.270637 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270643 | orchestrator | Saturday 28 February 2026 01:05:12 +0000 (0:00:03.131) 0:02:28.144 ***** 2026-02-28 01:07:04.270649 | orchestrator | 2026-02-28 01:07:04.270656 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270662 | orchestrator | Saturday 28 February 2026 01:05:12 +0000 (0:00:00.078) 0:02:28.223 ***** 2026-02-28 01:07:04.270668 | orchestrator | 2026-02-28 01:07:04.270675 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270681 | orchestrator | Saturday 28 February 2026 01:05:12 +0000 (0:00:00.075) 0:02:28.298 ***** 2026-02-28 01:07:04.270687 | orchestrator | 2026-02-28 01:07:04.270694 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270721 | orchestrator | Saturday 28 February 2026 01:05:12 +0000 (0:00:00.083) 0:02:28.382 ***** 2026-02-28 01:07:04.270727 
| orchestrator | 2026-02-28 01:07:04.270733 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270744 | orchestrator | Saturday 28 February 2026 01:05:12 +0000 (0:00:00.297) 0:02:28.679 ***** 2026-02-28 01:07:04.270750 | orchestrator | 2026-02-28 01:07:04.270756 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270762 | orchestrator | Saturday 28 February 2026 01:05:13 +0000 (0:00:00.083) 0:02:28.762 ***** 2026-02-28 01:07:04.270769 | orchestrator | 2026-02-28 01:07:04.270775 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-28 01:07:04.270781 | orchestrator | Saturday 28 February 2026 01:05:13 +0000 (0:00:00.068) 0:02:28.831 ***** 2026-02-28 01:07:04.270787 | orchestrator | 2026-02-28 01:07:04.270794 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-28 01:07:04.270800 | orchestrator | Saturday 28 February 2026 01:05:13 +0000 (0:00:00.120) 0:02:28.952 ***** 2026-02-28 01:07:04.270806 | orchestrator | changed: [testbed-manager] 2026-02-28 01:07:04.270812 | orchestrator | 2026-02-28 01:07:04.270819 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-28 01:07:04.270825 | orchestrator | Saturday 28 February 2026 01:05:34 +0000 (0:00:21.291) 0:02:50.244 ***** 2026-02-28 01:07:04.270831 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:07:04.270838 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:04.270844 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:04.270850 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:04.270856 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:07:04.270863 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:07:04.270869 | orchestrator | changed: [testbed-manager] 2026-02-28 01:07:04.270875 | 
orchestrator | 2026-02-28 01:07:04.270881 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-28 01:07:04.270888 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:19.107) 0:03:09.351 ***** 2026-02-28 01:07:04.270894 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:04.270900 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:04.270911 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:04.270917 | orchestrator | 2026-02-28 01:07:04.270923 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-28 01:07:04.270930 | orchestrator | Saturday 28 February 2026 01:06:06 +0000 (0:00:12.737) 0:03:22.088 ***** 2026-02-28 01:07:04.270936 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:04.270942 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:04.270948 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:04.270955 | orchestrator | 2026-02-28 01:07:04.270961 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-28 01:07:04.270967 | orchestrator | Saturday 28 February 2026 01:06:18 +0000 (0:00:12.148) 0:03:34.237 ***** 2026-02-28 01:07:04.270973 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:07:04.270980 | orchestrator | changed: [testbed-manager] 2026-02-28 01:07:04.270986 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:04.270992 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:07:04.270999 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:04.271005 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:07:04.271014 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:04.271020 | orchestrator | 2026-02-28 01:07:04.271027 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-28 01:07:04.271033 | orchestrator | Saturday 28 February 2026 01:06:30 +0000 
(0:00:11.947) 0:03:46.185 ***** 2026-02-28 01:07:04.271039 | orchestrator | changed: [testbed-manager] 2026-02-28 01:07:04.271046 | orchestrator | 2026-02-28 01:07:04.271052 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-28 01:07:04.271058 | orchestrator | Saturday 28 February 2026 01:06:39 +0000 (0:00:09.080) 0:03:55.265 ***** 2026-02-28 01:07:04.271064 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:04.271071 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:04.271077 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:04.271083 | orchestrator | 2026-02-28 01:07:04.271089 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-28 01:07:04.271096 | orchestrator | Saturday 28 February 2026 01:06:50 +0000 (0:00:10.742) 0:04:06.008 ***** 2026-02-28 01:07:04.271102 | orchestrator | changed: [testbed-manager] 2026-02-28 01:07:04.271108 | orchestrator | 2026-02-28 01:07:04.271114 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-28 01:07:04.271121 | orchestrator | Saturday 28 February 2026 01:06:55 +0000 (0:00:05.202) 0:04:11.211 ***** 2026-02-28 01:07:04.271127 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:07:04.271133 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:07:04.271139 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:07:04.271145 | orchestrator | 2026-02-28 01:07:04.271152 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:07:04.271158 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-28 01:07:04.271165 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:07:04.271171 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:07:04.271178 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:07:04.271184 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:07:04.271191 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:07:04.271200 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:07:04.271211 | orchestrator | 2026-02-28 01:07:04.271217 | orchestrator | 2026-02-28 01:07:04.271223 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:07:04.271230 | orchestrator | Saturday 28 February 2026 01:07:01 +0000 (0:00:06.299) 0:04:17.511 ***** 2026-02-28 01:07:04.271236 | orchestrator | =============================================================================== 2026-02-28 01:07:04.271242 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.07s 2026-02-28 01:07:04.271248 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 36.40s 2026-02-28 01:07:04.271255 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.29s 2026-02-28 01:07:04.271261 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.11s 2026-02-28 01:07:04.271267 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.74s 2026-02-28 01:07:04.271273 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.15s 2026-02-28 01:07:04.271280 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 11.95s 2026-02-28 01:07:04.271286 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter 
container ------- 10.74s 2026-02-28 01:07:04.271292 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.08s 2026-02-28 01:07:04.271298 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.91s 2026-02-28 01:07:04.271304 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.35s 2026-02-28 01:07:04.271311 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.30s 2026-02-28 01:07:04.271317 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 5.74s 2026-02-28 01:07:04.271323 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.70s 2026-02-28 01:07:04.271329 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.20s 2026-02-28 01:07:04.271335 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.02s 2026-02-28 01:07:04.271341 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.92s 2026-02-28 01:07:04.271348 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.06s 2026-02-28 01:07:04.271354 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.04s 2026-02-28 01:07:04.271360 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.71s 2026-02-28 01:07:04.271370 | orchestrator | 2026-02-28 01:07:04 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:04.271376 | orchestrator | 2026-02-28 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:07.302505 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:07.304217 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task 
eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:07.307228 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:07.308144 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:07.308362 | orchestrator | 2026-02-28 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:10.345910 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:10.347485 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:10.349971 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:10.351338 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:10.351463 | orchestrator | 2026-02-28 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:13.395793 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:13.397761 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:13.398751 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:13.399681 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:13.399891 | orchestrator | 2026-02-28 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:16.465803 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:16.467540 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 
eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:16.468010 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:16.468934 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:16.468971 | orchestrator | 2026-02-28 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:19.508143 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:19.512426 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:19.514524 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:19.516376 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:19.516518 | orchestrator | 2026-02-28 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:22.565367 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:22.567004 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:22.568394 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:22.571000 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:22.571118 | orchestrator | 2026-02-28 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:25.619790 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:25.620731 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 
eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:25.622314 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:25.623796 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:25.623898 | orchestrator | 2026-02-28 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:28.670162 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:28.670862 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:28.672846 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:28.674626 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:28.674674 | orchestrator | 2026-02-28 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:31.707233 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:31.708444 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:31.709012 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:31.709950 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:31.710077 | orchestrator | 2026-02-28 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:34.753122 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:34.754798 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 
eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:34.757175 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:34.757768 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:34.757922 | orchestrator | 2026-02-28 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:37.797286 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:37.798689 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:37.798796 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:37.799487 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:37.799523 | orchestrator | 2026-02-28 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:40.844268 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state STARTED 2026-02-28 01:07:40.845148 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:40.846053 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:40.848520 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state STARTED 2026-02-28 01:07:40.848568 | orchestrator | 2026-02-28 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:43.883815 | orchestrator | 2026-02-28 01:07:43.883962 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task f3a16511-315c-484a-a35a-89d3f02bb87d is in state SUCCESS 2026-02-28 01:07:43.885099 | orchestrator | 2026-02-28 
01:07:43.885140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:07:43.885149 | orchestrator | 2026-02-28 01:07:43.885156 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:07:43.885164 | orchestrator | Saturday 28 February 2026 01:06:20 +0000 (0:00:00.820) 0:00:00.820 ***** 2026-02-28 01:07:43.885171 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:07:43.885200 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:07:43.885207 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:07:43.885214 | orchestrator | 2026-02-28 01:07:43.885223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:07:43.885235 | orchestrator | Saturday 28 February 2026 01:06:21 +0000 (0:00:01.208) 0:00:02.029 ***** 2026-02-28 01:07:43.885248 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-28 01:07:43.885255 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-28 01:07:43.885262 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-28 01:07:43.885269 | orchestrator | 2026-02-28 01:07:43.885275 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-28 01:07:43.885282 | orchestrator | 2026-02-28 01:07:43.885288 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:07:43.885295 | orchestrator | Saturday 28 February 2026 01:06:23 +0000 (0:00:01.711) 0:00:03.740 ***** 2026-02-28 01:07:43.885302 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:07:43.885310 | orchestrator | 2026-02-28 01:07:43.885317 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-28 01:07:43.885324 | orchestrator | Saturday 28 
February 2026 01:06:24 +0000 (0:00:01.494) 0:00:05.234 ***** 2026-02-28 01:07:43.885331 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-28 01:07:43.885337 | orchestrator | 2026-02-28 01:07:43.885344 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-28 01:07:43.885351 | orchestrator | Saturday 28 February 2026 01:06:29 +0000 (0:00:04.514) 0:00:09.748 ***** 2026-02-28 01:07:43.885357 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-28 01:07:43.885364 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-28 01:07:43.885371 | orchestrator | 2026-02-28 01:07:43.885378 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-28 01:07:43.885385 | orchestrator | Saturday 28 February 2026 01:06:36 +0000 (0:00:07.813) 0:00:17.562 ***** 2026-02-28 01:07:43.885391 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:07:43.885398 | orchestrator | 2026-02-28 01:07:43.885405 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-28 01:07:43.885412 | orchestrator | Saturday 28 February 2026 01:06:40 +0000 (0:00:03.889) 0:00:21.451 ***** 2026-02-28 01:07:43.885418 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:07:43.885425 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-02-28 01:07:43.885432 | orchestrator | 2026-02-28 01:07:43.885438 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-28 01:07:43.885445 | orchestrator | Saturday 28 February 2026 01:06:45 +0000 (0:00:04.587) 0:00:26.039 ***** 2026-02-28 01:07:43.885451 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:07:43.885458 | orchestrator | 
2026-02-28 01:07:43.885465 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-28 01:07:43.885472 | orchestrator | Saturday 28 February 2026 01:06:49 +0000 (0:00:03.739) 0:00:29.778 ***** 2026-02-28 01:07:43.885478 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-28 01:07:43.885485 | orchestrator | 2026-02-28 01:07:43.885491 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:07:43.885498 | orchestrator | Saturday 28 February 2026 01:06:53 +0000 (0:00:04.125) 0:00:33.904 ***** 2026-02-28 01:07:43.885504 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:43.885511 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:43.885518 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:43.885524 | orchestrator | 2026-02-28 01:07:43.885531 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-28 01:07:43.885544 | orchestrator | Saturday 28 February 2026 01:06:53 +0000 (0:00:00.422) 0:00:34.327 ***** 2026-02-28 01:07:43.885566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.885590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.885598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 
01:07:43.885607 | orchestrator | 2026-02-28 01:07:43.885620 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-28 01:07:43.885631 | orchestrator | Saturday 28 February 2026 01:06:54 +0000 (0:00:01.133) 0:00:35.460 ***** 2026-02-28 01:07:43.885638 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:43.885645 | orchestrator | 2026-02-28 01:07:43.885652 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-28 01:07:43.885658 | orchestrator | Saturday 28 February 2026 01:06:54 +0000 (0:00:00.148) 0:00:35.609 ***** 2026-02-28 01:07:43.885665 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:43.885672 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:43.885678 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:43.885685 | orchestrator | 2026-02-28 01:07:43.885726 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:07:43.885736 | orchestrator | Saturday 28 February 2026 01:06:55 +0000 (0:00:00.784) 0:00:36.393 ***** 2026-02-28 01:07:43.885744 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:07:43.885758 | orchestrator | 2026-02-28 01:07:43.885770 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-28 01:07:43.885782 | orchestrator | Saturday 28 February 2026 01:06:56 +0000 (0:00:00.807) 0:00:37.200 ***** 2026-02-28 01:07:43.885797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.885814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.885823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.885832 | orchestrator | 2026-02-28 01:07:43.885840 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-28 01:07:43.885847 | orchestrator | Saturday 28 February 2026 01:06:58 +0000 (0:00:01.996) 0:00:39.196 ***** 2026-02-28 01:07:43.885856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.885872 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:43.885885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.885893 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:43.885906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.885915 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:43.885923 | orchestrator | 2026-02-28 01:07:43.885972 | orchestrator | TASK [service-cert-copy : placement | Copying over 
backend internal TLS key] *** 2026-02-28 01:07:43.885981 | orchestrator | Saturday 28 February 2026 01:06:59 +0000 (0:00:01.230) 0:00:40.427 ***** 2026-02-28 01:07:43.885990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.885998 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:43.886006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.886062 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:43.886076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.886085 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:43.886092 | orchestrator | 2026-02-28 01:07:43.886099 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-28 01:07:43.886106 | orchestrator | Saturday 28 February 2026 01:07:00 +0000 (0:00:00.851) 0:00:41.279 ***** 2026-02-28 01:07:43.886124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886165 | orchestrator | 2026-02-28 01:07:43.886172 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-28 01:07:43.886179 | orchestrator | Saturday 28 February 2026 01:07:02 +0000 (0:00:01.510) 0:00:42.790 ***** 2026-02-28 01:07:43.886190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886224 | orchestrator | 2026-02-28 01:07:43.886235 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-28 01:07:43.886245 | orchestrator | Saturday 28 February 2026 01:07:04 +0000 (0:00:02.813) 0:00:45.604 ***** 2026-02-28 01:07:43.886256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-28 01:07:43.886268 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-28 01:07:43.886286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-28 01:07:43.886296 | orchestrator | 2026-02-28 01:07:43.886307 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-28 01:07:43.886318 | orchestrator | Saturday 28 February 2026 01:07:06 +0000 (0:00:01.499) 0:00:47.103 ***** 2026-02-28 01:07:43.886327 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:43.886338 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:43.886349 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:43.886360 | orchestrator | 2026-02-28 01:07:43.886370 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-28 01:07:43.886380 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:01.445) 0:00:48.548 ***** 2026-02-28 01:07:43.886391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.886402 
| orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:43.886449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:07:43.886463 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:43.886484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}})  2026-02-28 01:07:43.886500 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:43.886518 | orchestrator | 2026-02-28 01:07:43.886528 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-28 01:07:43.886538 | orchestrator | Saturday 28 February 2026 01:07:08 +0000 (0:00:00.629) 0:00:49.178 ***** 2026-02-28 01:07:43.886563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:07:43.886601 | orchestrator | 2026-02-28 01:07:43.886611 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-28 01:07:43.886620 | orchestrator | Saturday 28 February 2026 01:07:09 +0000 (0:00:01.158) 0:00:50.336 ***** 2026-02-28 01:07:43.886630 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:43.886641 | orchestrator | 2026-02-28 01:07:43.886651 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-28 01:07:43.886660 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:02.819) 0:00:53.155 ***** 2026-02-28 01:07:43.886671 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:43.886681 | orchestrator | 
2026-02-28 01:07:43.886716 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-28 01:07:43.886729 | orchestrator | Saturday 28 February 2026 01:07:14 +0000 (0:00:02.367) 0:00:55.523 ***** 2026-02-28 01:07:43.886747 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:43.886758 | orchestrator | 2026-02-28 01:07:43.886765 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:07:43.886772 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:15.081) 0:01:10.604 ***** 2026-02-28 01:07:43.886778 | orchestrator | 2026-02-28 01:07:43.886792 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:07:43.886800 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:00.084) 0:01:10.688 ***** 2026-02-28 01:07:43.886806 | orchestrator | 2026-02-28 01:07:43.886813 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:07:43.886820 | orchestrator | Saturday 28 February 2026 01:07:30 +0000 (0:00:00.061) 0:01:10.750 ***** 2026-02-28 01:07:43.886826 | orchestrator | 2026-02-28 01:07:43.886835 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-28 01:07:43.886846 | orchestrator | Saturday 28 February 2026 01:07:30 +0000 (0:00:00.064) 0:01:10.814 ***** 2026-02-28 01:07:43.886856 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:43.886923 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:43.886938 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:43.886949 | orchestrator | 2026-02-28 01:07:43.886961 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:07:43.886974 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:07:43.886986 | 
orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:07:43.886999 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:07:43.887006 | orchestrator | 2026-02-28 01:07:43.887013 | orchestrator | 2026-02-28 01:07:43.887019 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:07:43.887026 | orchestrator | Saturday 28 February 2026 01:07:40 +0000 (0:00:10.367) 0:01:21.182 ***** 2026-02-28 01:07:43.887033 | orchestrator | =============================================================================== 2026-02-28 01:07:43.887040 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.08s 2026-02-28 01:07:43.887047 | orchestrator | placement : Restart placement-api container ---------------------------- 10.37s 2026-02-28 01:07:43.887054 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.81s 2026-02-28 01:07:43.887060 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.59s 2026-02-28 01:07:43.887067 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.52s 2026-02-28 01:07:43.887074 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.13s 2026-02-28 01:07:43.887081 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.89s 2026-02-28 01:07:43.887088 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.74s 2026-02-28 01:07:43.887094 | orchestrator | placement : Creating placement databases -------------------------------- 2.82s 2026-02-28 01:07:43.887101 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.81s 2026-02-28 01:07:43.887108 | orchestrator | placement : Creating 
placement databases user and setting permissions --- 2.37s 2026-02-28 01:07:43.887115 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.00s 2026-02-28 01:07:43.887121 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.71s 2026-02-28 01:07:43.887128 | orchestrator | placement : Copying over config.json files for services ----------------- 1.51s 2026-02-28 01:07:43.887135 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.50s 2026-02-28 01:07:43.887141 | orchestrator | placement : include_tasks ----------------------------------------------- 1.50s 2026-02-28 01:07:43.887148 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.45s 2026-02-28 01:07:43.887155 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.23s 2026-02-28 01:07:43.887162 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.20s 2026-02-28 01:07:43.887182 | orchestrator | placement : Check placement containers ---------------------------------- 1.16s 2026-02-28 01:07:43.887190 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:43.887197 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:43.887295 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:07:43.888149 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:07:43.889049 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 206bc227-ce81-4b59-9959-51425d794814 is in state SUCCESS 2026-02-28 01:07:43.889253 | orchestrator | 2026-02-28 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 
01:07:46.935268 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:07:46.935354 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:07:46.935368 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:07:46.935380 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:07:46.935391 | orchestrator | 2026-02-28 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:32.674864 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task 
eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:08:32.674965 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:08:32.675371 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:08:32.676194 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:08:32.676469 | orchestrator | 2026-02-28 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:35.701682 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:08:35.702166 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state STARTED 2026-02-28 01:08:35.702743 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:08:35.703406 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:08:35.703585 | orchestrator | 2026-02-28 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:38.726813 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED 2026-02-28 01:08:38.728934 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task b7663edc-c115-4a19-8223-eaf0e3b5cd56 is in state SUCCESS 2026-02-28 01:08:38.730435 | orchestrator | 2026-02-28 01:08:38.730481 | orchestrator | 2026-02-28 01:08:38.730489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:08:38.730497 | orchestrator | 2026-02-28 01:08:38.730504 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:08:38.730511 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:00.421) 0:00:00.421 
***** 2026-02-28 01:08:38.730518 | orchestrator | ok: [testbed-manager] 2026-02-28 01:08:38.730525 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:38.730531 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:38.730537 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:38.730543 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:38.730549 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:38.730555 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:38.730562 | orchestrator | 2026-02-28 01:08:38.730568 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:08:38.730574 | orchestrator | Saturday 28 February 2026 01:07:08 +0000 (0:00:01.024) 0:00:01.445 ***** 2026-02-28 01:08:38.730581 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730587 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730593 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730600 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730606 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730612 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730618 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-28 01:08:38.730624 | orchestrator | 2026-02-28 01:08:38.730630 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-28 01:08:38.730636 | orchestrator | 2026-02-28 01:08:38.730642 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-28 01:08:38.730649 | orchestrator | Saturday 28 February 2026 01:07:09 +0000 (0:00:00.859) 0:00:02.305 ***** 2026-02-28 01:08:38.730666 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:08:38.730673 | orchestrator | 2026-02-28 01:08:38.730679 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-28 01:08:38.730704 | orchestrator | Saturday 28 February 2026 01:07:11 +0000 (0:00:01.841) 0:00:04.147 ***** 2026-02-28 01:08:38.730717 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-28 01:08:38.730725 | orchestrator | 2026-02-28 01:08:38.730757 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-28 01:08:38.730771 | orchestrator | Saturday 28 February 2026 01:07:15 +0000 (0:00:04.130) 0:00:08.278 ***** 2026-02-28 01:08:38.730799 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-28 01:08:38.730811 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-28 01:08:38.730818 | orchestrator | 2026-02-28 01:08:38.730825 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-28 01:08:38.730831 | orchestrator | Saturday 28 February 2026 01:07:22 +0000 (0:00:06.782) 0:00:15.060 ***** 2026-02-28 01:08:38.730837 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-28 01:08:38.730843 | orchestrator | 2026-02-28 01:08:38.730850 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-28 01:08:38.730856 | orchestrator | Saturday 28 February 2026 01:07:25 +0000 (0:00:03.406) 0:00:18.466 ***** 2026-02-28 01:08:38.730862 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:08:38.730868 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-28 01:08:38.730874 | 
orchestrator | 2026-02-28 01:08:38.730880 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-28 01:08:38.730887 | orchestrator | Saturday 28 February 2026 01:07:30 +0000 (0:00:04.142) 0:00:22.609 ***** 2026-02-28 01:08:38.730893 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-28 01:08:38.730899 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-28 01:08:38.730905 | orchestrator | 2026-02-28 01:08:38.730912 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-28 01:08:38.730918 | orchestrator | Saturday 28 February 2026 01:07:36 +0000 (0:00:06.356) 0:00:28.965 ***** 2026-02-28 01:08:38.730924 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-28 01:08:38.730930 | orchestrator | 2026-02-28 01:08:38.730937 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:08:38.730943 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730949 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730956 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730962 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730968 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730986 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730993 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:08:38.730999 | orchestrator | 2026-02-28 01:08:38.731005 | orchestrator | 
2026-02-28 01:08:38.731011 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:08:38.731018 | orchestrator | Saturday 28 February 2026 01:07:41 +0000 (0:00:05.365) 0:00:34.331 ***** 2026-02-28 01:08:38.731024 | orchestrator | =============================================================================== 2026-02-28 01:08:38.731032 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.78s 2026-02-28 01:08:38.731039 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.36s 2026-02-28 01:08:38.731046 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.37s 2026-02-28 01:08:38.731053 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.14s 2026-02-28 01:08:38.731065 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.13s 2026-02-28 01:08:38.731073 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.41s 2026-02-28 01:08:38.731080 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.84s 2026-02-28 01:08:38.731088 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s 2026-02-28 01:08:38.731095 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2026-02-28 01:08:38.731104 | orchestrator | 2026-02-28 01:08:38.731371 | orchestrator | 2026-02-28 01:08:38.731389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:08:38.731399 | orchestrator | 2026-02-28 01:08:38.731411 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:08:38.731421 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:00.371) 0:00:00.371 ***** 2026-02-28 
01:08:38.731433 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:38.731444 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:38.731456 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:38.731474 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:38.731484 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:38.731490 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:38.731497 | orchestrator | 2026-02-28 01:08:38.731503 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:08:38.731509 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:01.112) 0:00:01.483 ***** 2026-02-28 01:08:38.731515 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-28 01:08:38.731522 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-28 01:08:38.731528 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-28 01:08:38.731534 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-28 01:08:38.731540 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-28 01:08:38.731547 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-28 01:08:38.731553 | orchestrator | 2026-02-28 01:08:38.731559 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-28 01:08:38.731565 | orchestrator | 2026-02-28 01:08:38.731571 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:08:38.731577 | orchestrator | Saturday 28 February 2026 01:02:48 +0000 (0:00:00.903) 0:00:02.387 ***** 2026-02-28 01:08:38.731584 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:08:38.731590 | orchestrator | 2026-02-28 01:08:38.731596 | orchestrator | TASK [neutron : Get 
container facts] ******************************************* 2026-02-28 01:08:38.731602 | orchestrator | Saturday 28 February 2026 01:02:51 +0000 (0:00:02.381) 0:00:04.768 ***** 2026-02-28 01:08:38.731608 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:38.731615 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:38.731621 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:38.731627 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:38.731633 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:38.731639 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:38.731645 | orchestrator | 2026-02-28 01:08:38.731652 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-28 01:08:38.731658 | orchestrator | Saturday 28 February 2026 01:02:53 +0000 (0:00:02.102) 0:00:06.871 ***** 2026-02-28 01:08:38.731664 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:38.731670 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:38.731677 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:38.731683 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:38.731728 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:38.731739 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:38.731756 | orchestrator | 2026-02-28 01:08:38.731767 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-28 01:08:38.731787 | orchestrator | Saturday 28 February 2026 01:02:54 +0000 (0:00:01.290) 0:00:08.161 ***** 2026-02-28 01:08:38.731799 | orchestrator | ok: [testbed-node-0] => { 2026-02-28 01:08:38.731809 | orchestrator |  "changed": false, 2026-02-28 01:08:38.731820 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:38.731832 | orchestrator | } 2026-02-28 01:08:38.731843 | orchestrator | ok: [testbed-node-1] => { 2026-02-28 01:08:38.731853 | orchestrator |  "changed": false, 2026-02-28 01:08:38.731859 | orchestrator |  "msg": "All assertions passed" 
2026-02-28 01:08:38.731865 | orchestrator | } 2026-02-28 01:08:38.731872 | orchestrator | ok: [testbed-node-2] => { 2026-02-28 01:08:38.731878 | orchestrator |  "changed": false, 2026-02-28 01:08:38.731884 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:38.731890 | orchestrator | } 2026-02-28 01:08:38.731896 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 01:08:38.731902 | orchestrator |  "changed": false, 2026-02-28 01:08:38.731908 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:38.731915 | orchestrator | } 2026-02-28 01:08:38.731921 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 01:08:38.731927 | orchestrator |  "changed": false, 2026-02-28 01:08:38.731933 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:38.731939 | orchestrator | } 2026-02-28 01:08:38.731947 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 01:08:38.731967 | orchestrator |  "changed": false, 2026-02-28 01:08:38.731977 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:38.731988 | orchestrator | } 2026-02-28 01:08:38.731997 | orchestrator | 2026-02-28 01:08:38.732003 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-28 01:08:38.732009 | orchestrator | Saturday 28 February 2026 01:02:55 +0000 (0:00:01.065) 0:00:09.226 ***** 2026-02-28 01:08:38.732015 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.732022 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.732028 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.732034 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.732040 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.732046 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.732053 | orchestrator | 2026-02-28 01:08:38.732059 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-28 01:08:38.732065 | orchestrator | Saturday 
28 February 2026 01:02:56 +0000 (0:00:01.251) 0:00:10.478 ***** 2026-02-28 01:08:38.732071 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-28 01:08:38.732077 | orchestrator | 2026-02-28 01:08:38.732084 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-28 01:08:38.732090 | orchestrator | Saturday 28 February 2026 01:03:01 +0000 (0:00:04.195) 0:00:14.674 ***** 2026-02-28 01:08:38.732096 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-28 01:08:38.732103 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-28 01:08:38.732109 | orchestrator | 2026-02-28 01:08:38.732116 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-28 01:08:38.732122 | orchestrator | Saturday 28 February 2026 01:03:09 +0000 (0:00:08.558) 0:00:23.232 ***** 2026-02-28 01:08:38.732128 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:08:38.732135 | orchestrator | 2026-02-28 01:08:38.732143 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-28 01:08:38.732156 | orchestrator | Saturday 28 February 2026 01:03:13 +0000 (0:00:03.477) 0:00:26.710 ***** 2026-02-28 01:08:38.732178 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:08:38.732188 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-28 01:08:38.732197 | orchestrator | 2026-02-28 01:08:38.732207 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-28 01:08:38.732217 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:04.272) 0:00:30.983 ***** 2026-02-28 01:08:38.732233 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:08:38.732242 | orchestrator | 
2026-02-28 01:08:38.732251 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-28 01:08:38.732260 | orchestrator | Saturday 28 February 2026 01:03:22 +0000 (0:00:05.563) 0:00:36.547 ***** 2026-02-28 01:08:38.732269 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-28 01:08:38.732278 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-28 01:08:38.732287 | orchestrator | 2026-02-28 01:08:38.732297 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:08:38.732306 | orchestrator | Saturday 28 February 2026 01:03:30 +0000 (0:00:07.585) 0:00:44.132 ***** 2026-02-28 01:08:38.732317 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.732328 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.732337 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.732348 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.732358 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.732368 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.732378 | orchestrator | 2026-02-28 01:08:38.732388 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-28 01:08:38.732398 | orchestrator | Saturday 28 February 2026 01:03:31 +0000 (0:00:00.814) 0:00:44.946 ***** 2026-02-28 01:08:38.732408 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.732418 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.732429 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.732440 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.732451 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.732459 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.732465 | orchestrator | 2026-02-28 01:08:38.732472 | orchestrator | TASK [neutron : Check IPv6 
support] ******************************************** 2026-02-28 01:08:38.732478 | orchestrator | Saturday 28 February 2026 01:03:35 +0000 (0:00:03.855) 0:00:48.802 ***** 2026-02-28 01:08:38.732484 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:38.732491 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:38.732497 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:38.732503 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:38.732509 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:38.732516 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:38.732522 | orchestrator | 2026-02-28 01:08:38.732528 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-28 01:08:38.732535 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:01.641) 0:00:50.443 ***** 2026-02-28 01:08:38.732541 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.732547 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.732556 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.732571 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.732584 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.732594 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.732604 | orchestrator | 2026-02-28 01:08:38.732612 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-28 01:08:38.732622 | orchestrator | Saturday 28 February 2026 01:03:41 +0000 (0:00:04.203) 0:00:54.646 ***** 2026-02-28 01:08:38.732644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.732673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.732684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.732711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.732722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 
01:08:38.732740 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.732759 | orchestrator | 2026-02-28 01:08:38.732769 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-28 01:08:38.732779 | orchestrator | Saturday 28 February 2026 01:03:47 +0000 (0:00:06.732) 0:01:01.379 ***** 2026-02-28 01:08:38.732789 | orchestrator | [WARNING]: Skipped 2026-02-28 01:08:38.732800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-28 01:08:38.732810 | orchestrator | due to this access issue: 2026-02-28 01:08:38.732820 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-28 01:08:38.732832 | orchestrator | a directory 2026-02-28 01:08:38.732842 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:08:38.732853 | orchestrator | 2026-02-28 01:08:38.732865 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:08:38.732877 | orchestrator | Saturday 28 February 2026 01:03:49 +0000 (0:00:01.585) 0:01:02.965 ***** 2026-02-28 01:08:38.732889 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:08:38.732902 | orchestrator | 2026-02-28 01:08:38.732909 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-28 01:08:38.732920 | orchestrator | Saturday 28 February 2026 01:03:50 +0000 (0:00:01.267) 0:01:04.233 ***** 2026-02-28 01:08:38.732927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.732934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.732947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.732965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 
01:08:38.732975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.732982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.732988 | orchestrator | 2026-02-28 01:08:38.732995 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-28 01:08:38.733001 | orchestrator | Saturday 28 February 2026 01:03:55 +0000 (0:00:04.562) 0:01:08.795 ***** 2026-02-28 01:08:38.733008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733031 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.733038 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.733044 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733051 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.733060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733067 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.733074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733080 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.733087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733098 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.733104 | orchestrator | 2026-02-28 01:08:38.733111 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-28 01:08:38.733117 | orchestrator | Saturday 28 February 2026 01:04:02 +0000 (0:00:07.563) 0:01:16.359 ***** 2026-02-28 01:08:38.733129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733135 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.733145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733151 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.733158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733164 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.733171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733582 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.733589 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.733595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733602 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.733608 | orchestrator | 2026-02-28 01:08:38.733614 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-28 01:08:38.733621 | orchestrator | Saturday 28 February 2026 01:04:07 +0000 (0:00:04.748) 0:01:21.107 ***** 2026-02-28 01:08:38.733627 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.733633 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.733639 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 01:08:38.733645 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.733652 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.733658 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.733748 | orchestrator | 2026-02-28 01:08:38.733768 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-28 01:08:38.733779 | orchestrator | Saturday 28 February 2026 01:04:11 +0000 (0:00:04.117) 0:01:25.225 ***** 2026-02-28 01:08:38.733790 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.733801 | orchestrator | 2026-02-28 01:08:38.733809 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-28 01:08:38.733815 | orchestrator | Saturday 28 February 2026 01:04:11 +0000 (0:00:00.247) 0:01:25.473 ***** 2026-02-28 01:08:38.733821 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.733828 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.733834 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.733840 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.733846 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.733852 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.733858 | orchestrator | 2026-02-28 01:08:38.733865 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-28 01:08:38.733871 | orchestrator | Saturday 28 February 2026 01:04:13 +0000 (0:00:01.326) 0:01:26.800 ***** 2026-02-28 01:08:38.733885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733892 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.733898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733919 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.733925 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.733955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733962 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.733969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.733980 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.733986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.733992 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.733998 | orchestrator | 2026-02-28 01:08:38.734005 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-28 01:08:38.734011 | orchestrator | Saturday 28 February 2026 01:04:18 +0000 (0:00:04.835) 0:01:31.635 ***** 2026-02-28 01:08:38.734059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.734095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.734106 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.734112 | orchestrator | 2026-02-28 01:08:38.734118 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-28 01:08:38.734125 | orchestrator | Saturday 28 February 2026 01:04:23 +0000 (0:00:05.713) 0:01:37.349 ***** 2026-02-28 01:08:38.734131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 
01:08:38.734145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.734170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.734178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.734192 | orchestrator | 2026-02-28 01:08:38.734199 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-28 01:08:38.734210 | orchestrator | Saturday 28 February 2026 01:04:32 +0000 (0:00:08.796) 0:01:46.145 ***** 2026-02-28 01:08:38.734217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.734225 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.734240 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.734259 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.734278 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.734295 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.734310 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734317 | orchestrator | 2026-02-28 01:08:38.734324 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-28 01:08:38.734331 | orchestrator | Saturday 28 February 2026 01:04:36 +0000 (0:00:04.039) 0:01:50.184 ***** 2026-02-28 01:08:38.734338 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734345 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734353 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734360 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:08:38.734367 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:08:38.734374 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:08:38.734381 | orchestrator | 2026-02-28 01:08:38.734388 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-28 01:08:38.734396 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:05.455) 0:01:55.639 ***** 2026-02-28 01:08:38.734403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 
01:08:38.734411 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.734433 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.734451 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.734489 | orchestrator | 2026-02-28 01:08:38.734497 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-28 01:08:38.734504 | orchestrator | Saturday 28 February 2026 01:04:49 +0000 (0:00:07.590) 0:02:03.229 ***** 2026-02-28 01:08:38.734511 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734518 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734526 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734533 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734539 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734546 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734552 | orchestrator | 2026-02-28 01:08:38.734558 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-28 01:08:38.734564 | orchestrator | Saturday 28 February 2026 01:04:54 +0000 (0:00:05.226) 0:02:08.456 ***** 2026-02-28 01:08:38.734570 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734576 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734583 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734589 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734595 | orchestrator | 
skipping: [testbed-node-4] 2026-02-28 01:08:38.734601 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734607 | orchestrator | 2026-02-28 01:08:38.734613 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-28 01:08:38.734620 | orchestrator | Saturday 28 February 2026 01:05:00 +0000 (0:00:05.255) 0:02:13.711 ***** 2026-02-28 01:08:38.734626 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734632 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734638 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734645 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734651 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734657 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734663 | orchestrator | 2026-02-28 01:08:38.734669 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-28 01:08:38.734679 | orchestrator | Saturday 28 February 2026 01:05:02 +0000 (0:00:02.642) 0:02:16.354 ***** 2026-02-28 01:08:38.734704 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734715 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734726 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734737 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734747 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734758 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734768 | orchestrator | 2026-02-28 01:08:38.734778 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-28 01:08:38.734789 | orchestrator | Saturday 28 February 2026 01:05:05 +0000 (0:00:02.427) 0:02:18.782 ***** 2026-02-28 01:08:38.734800 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734811 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734818 | orchestrator | 
skipping: [testbed-node-3] 2026-02-28 01:08:38.734824 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734831 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734837 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734843 | orchestrator | 2026-02-28 01:08:38.734849 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-28 01:08:38.734855 | orchestrator | Saturday 28 February 2026 01:05:08 +0000 (0:00:03.334) 0:02:22.117 ***** 2026-02-28 01:08:38.734861 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734867 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734873 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734879 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734885 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734891 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734898 | orchestrator | 2026-02-28 01:08:38.734904 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-28 01:08:38.734910 | orchestrator | Saturday 28 February 2026 01:05:12 +0000 (0:00:03.757) 0:02:25.874 ***** 2026-02-28 01:08:38.734921 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:08:38.734927 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.734933 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:08:38.734939 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.734945 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:08:38.734951 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.734958 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 
01:08:38.734964 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.734970 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:08:38.734976 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.734982 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-28 01:08:38.734988 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.734994 | orchestrator | 2026-02-28 01:08:38.735000 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-28 01:08:38.735006 | orchestrator | Saturday 28 February 2026 01:05:15 +0000 (0:00:03.191) 0:02:29.066 ***** 2026-02-28 01:08:38.735019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.735026 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.735039 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.735059 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.735072 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.735085 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.735101 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735108 | orchestrator | 2026-02-28 01:08:38.735114 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-28 01:08:38.735120 | orchestrator | Saturday 28 February 2026 01:05:18 +0000 (0:00:02.995) 0:02:32.062 ***** 2026-02-28 01:08:38.735129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.735140 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.735152 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.735165 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.735182 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.735194 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.735217 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735223 | orchestrator | 2026-02-28 01:08:38.735229 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-28 01:08:38.735235 | orchestrator | Saturday 28 February 2026 01:05:22 +0000 (0:00:04.110) 0:02:36.172 ***** 2026-02-28 01:08:38.735241 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735252 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735268 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735279 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735289 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735299 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735308 | orchestrator | 2026-02-28 01:08:38.735319 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-28 01:08:38.735329 | orchestrator | Saturday 28 February 2026 01:05:28 +0000 (0:00:05.837) 0:02:42.010 ***** 2026-02-28 01:08:38.735339 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735349 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735359 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735369 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:08:38.735379 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:08:38.735388 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:08:38.735398 | orchestrator | 2026-02-28 01:08:38.735407 | orchestrator | TASK [neutron : Copying over metering_agent.ini] 
******************************* 2026-02-28 01:08:38.735416 | orchestrator | Saturday 28 February 2026 01:05:33 +0000 (0:00:05.340) 0:02:47.350 ***** 2026-02-28 01:08:38.735426 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735437 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735448 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735458 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735469 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735479 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735489 | orchestrator | 2026-02-28 01:08:38.735500 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-28 01:08:38.735510 | orchestrator | Saturday 28 February 2026 01:05:38 +0000 (0:00:04.578) 0:02:51.929 ***** 2026-02-28 01:08:38.735521 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735532 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735543 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735554 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735564 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735573 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735584 | orchestrator | 2026-02-28 01:08:38.735595 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-28 01:08:38.735607 | orchestrator | Saturday 28 February 2026 01:05:44 +0000 (0:00:06.412) 0:02:58.341 ***** 2026-02-28 01:08:38.735618 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735628 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735638 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735649 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735659 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735670 | orchestrator | skipping: [testbed-node-5] 
2026-02-28 01:08:38.735680 | orchestrator | 2026-02-28 01:08:38.735710 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-28 01:08:38.735730 | orchestrator | Saturday 28 February 2026 01:05:48 +0000 (0:00:03.410) 0:03:01.751 ***** 2026-02-28 01:08:38.735758 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735771 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735782 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735793 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735803 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735812 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735818 | orchestrator | 2026-02-28 01:08:38.735825 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-28 01:08:38.735831 | orchestrator | Saturday 28 February 2026 01:05:51 +0000 (0:00:03.238) 0:03:04.990 ***** 2026-02-28 01:08:38.735841 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735854 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735868 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735877 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735886 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735895 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.735904 | orchestrator | 2026-02-28 01:08:38.735914 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-02-28 01:08:38.735923 | orchestrator | Saturday 28 February 2026 01:05:54 +0000 (0:00:02.759) 0:03:07.749 ***** 2026-02-28 01:08:38.735933 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.735943 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.735952 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.735963 | orchestrator | skipping: [testbed-node-5] 
2026-02-28 01:08:38.735974 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.735984 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.735993 | orchestrator | 2026-02-28 01:08:38.735999 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-28 01:08:38.736005 | orchestrator | Saturday 28 February 2026 01:05:58 +0000 (0:00:04.445) 0:03:12.195 ***** 2026-02-28 01:08:38.736012 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.736018 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.736024 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.736030 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.736036 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.736042 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.736048 | orchestrator | 2026-02-28 01:08:38.736060 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-28 01:08:38.736067 | orchestrator | Saturday 28 February 2026 01:06:02 +0000 (0:00:03.449) 0:03:15.645 ***** 2026-02-28 01:08:38.736073 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:38.736079 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.736086 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:38.736092 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.736098 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:38.736104 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.736110 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:38.736117 | orchestrator | skipping: [testbed-node-3] 2026-02-28 
01:08:38.736123 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:38.736129 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.736135 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:38.736142 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.736148 | orchestrator | 2026-02-28 01:08:38.736154 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-28 01:08:38.736160 | orchestrator | Saturday 28 February 2026 01:06:04 +0000 (0:00:02.355) 0:03:18.000 ***** 2026-02-28 01:08:38.736173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.736180 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:38.736193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.736200 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:38.736207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:38.736217 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:38.736234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.736249 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:38.736259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.736276 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:38.736287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:38.736297 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:38.736306 | orchestrator | 2026-02-28 01:08:38.736315 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-28 01:08:38.736326 | orchestrator | Saturday 28 February 2026 01:06:07 +0000 (0:00:02.663) 0:03:20.664 ***** 2026-02-28 01:08:38.736343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.736358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.736368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:38.736387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.736403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.736413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:38.736424 
| orchestrator |
2026-02-28 01:08:38.736434 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-28 01:08:38.736443 | orchestrator | Saturday 28 February 2026 01:06:13 +0000 (0:00:06.097) 0:03:26.761 *****
2026-02-28 01:08:38.736453 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:38.736463 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:38.736473 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:38.736482 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:38.736492 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:38.736502 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:38.736512 | orchestrator |
2026-02-28 01:08:38.736522 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-28 01:08:38.736533 | orchestrator | Saturday 28 February 2026 01:06:14 +0000 (0:00:00.902) 0:03:27.664 *****
2026-02-28 01:08:38.736544 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:38.736555 | orchestrator |
2026-02-28 01:08:38.736565 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-28 01:08:38.736579 | orchestrator | Saturday 28 February 2026 01:06:16 +0000 (0:00:02.523) 0:03:30.187 *****
2026-02-28 01:08:38.736590 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:38.736609 | orchestrator |
2026-02-28 01:08:38.736619 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-28 01:08:38.736630 | orchestrator | Saturday 28 February 2026 01:06:19 +0000 (0:00:02.681) 0:03:32.869 *****
2026-02-28 01:08:38.736640 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:38.736651 | orchestrator |
2026-02-28 01:08:38.736659 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:38.736665 | orchestrator | Saturday 28 February 2026 01:07:11 +0000 (0:00:52.324) 0:04:25.193 *****
2026-02-28 01:08:38.736672 | orchestrator |
2026-02-28 01:08:38.736678 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:38.736684 | orchestrator | Saturday 28 February 2026 01:07:11 +0000 (0:00:00.112) 0:04:25.306 *****
2026-02-28 01:08:38.736765 | orchestrator |
2026-02-28 01:08:38.736772 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:38.736778 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:00.318) 0:04:25.624 *****
2026-02-28 01:08:38.736784 | orchestrator |
2026-02-28 01:08:38.736790 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:38.736797 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:00.072) 0:04:25.696 *****
2026-02-28 01:08:38.736803 | orchestrator |
2026-02-28 01:08:38.736809 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:38.736815 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:00.085) 0:04:25.782 *****
2026-02-28 01:08:38.736821 | orchestrator |
2026-02-28 01:08:38.736828 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:38.736834 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:00.076) 0:04:25.858 *****
2026-02-28 01:08:38.736840 | orchestrator |
2026-02-28 01:08:38.736846 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-28 01:08:38.736853 | orchestrator | Saturday 28 February 2026 01:07:12 +0000 (0:00:00.078) 0:04:25.936 *****
2026-02-28 01:08:38.736859 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:38.736865 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:08:38.736871 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:08:38.736877 | orchestrator |
2026-02-28 01:08:38.736884 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-28 01:08:38.736890 | orchestrator | Saturday 28 February 2026 01:07:40 +0000 (0:00:27.839) 0:04:53.776 *****
2026-02-28 01:08:38.736898 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:08:38.736909 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:08:38.736918 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:08:38.736929 | orchestrator |
2026-02-28 01:08:38.736939 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:08:38.736951 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:38.736962 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-28 01:08:38.736972 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-28 01:08:38.736983 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:38.737001 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:38.737012 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:38.737023 | orchestrator |
2026-02-28 01:08:38.737033 | orchestrator |
2026-02-28 01:08:38.737045 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:08:38.737051 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:56.965) 0:05:50.741 *****
2026-02-28 01:08:38.737058 | orchestrator | ===============================================================================
2026-02-28 01:08:38.737064 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.97s
2026-02-28 01:08:38.737070 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 52.32s
2026-02-28 01:08:38.737076 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.84s
2026-02-28 01:08:38.737083 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.80s
2026-02-28 01:08:38.737089 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 8.56s
2026-02-28 01:08:38.737095 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 7.59s
2026-02-28 01:08:38.737101 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.58s
2026-02-28 01:08:38.737107 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 7.56s
2026-02-28 01:08:38.737114 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 6.73s
2026-02-28 01:08:38.737120 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 6.41s
2026-02-28 01:08:38.737126 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.10s
2026-02-28 01:08:38.737132 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 5.84s
2026-02-28 01:08:38.737142 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.71s
2026-02-28 01:08:38.737149 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 5.57s
2026-02-28 01:08:38.737155 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.46s
2026-02-28 01:08:38.737161 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.34s
2026-02-28 01:08:38.737167 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 5.26s
2026-02-28 01:08:38.737173 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 5.23s
2026-02-28 01:08:38.737179 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.84s
2026-02-28 01:08:38.737186 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.75s
2026-02-28 01:08:38.737192 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED
2026-02-28 01:08:38.737198 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED
2026-02-28 01:08:38.737204 | orchestrator | 2026-02-28 01:08:38 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:08:41.797340 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:08:41.797432 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:08:41.799047 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED
2026-02-28 01:08:41.800995 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED
2026-02-28 01:08:41.801051 | orchestrator | 2026-02-28 01:08:41 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:08:44.837542 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state STARTED
2026-02-28 01:08:44.837963 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:08:44.838485 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED
2026-02-28 01:08:44.839423 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED
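The interleaved "Task … is in state STARTED" and "Wait 1 second(s) until the next check" lines come from the deploy wrapper polling the state of the queued OSISM tasks once per second until each reports SUCCESS. A minimal sketch of that polling pattern (names such as `wait_for_tasks` and `get_state` are hypothetical stand-ins, not the real OSISM API, which the log only shows the output of):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll until every task reports SUCCESS, logging each check.

    get_state(task_id) -> str is a hypothetical stand-in for the real
    task-status lookup; SUCCESS removes a task from the pending set.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state == "SUCCESS":
                pending.discard(tid)
        if not pending:
            return True
        if time.monotonic() >= deadline:
            return False  # give up instead of blocking the job forever
        print(f"Wait {interval:.0f} second(s) until the next check")
        time.sleep(interval)
    return True
```
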
2026-02-28 01:08:44.839475 | orchestrator | 2026-02-28 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:08:47.873421 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task eb8bde0d-aec1-4d55-8011-816989869f56 is in state SUCCESS
2026-02-28 01:08:47.874511 | orchestrator |
2026-02-28 01:08:47.874544 | orchestrator |
2026-02-28 01:08:47.874553 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:08:47.874562 | orchestrator |
2026-02-28 01:08:47.874570 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:08:47.874578 | orchestrator | Saturday 28 February 2026 01:06:47 +0000 (0:00:00.338) 0:00:00.338 *****
2026-02-28 01:08:47.874586 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:08:47.874595 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:08:47.874603 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:08:47.874611 | orchestrator |
2026-02-28 01:08:47.874619 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:08:47.874627 | orchestrator | Saturday 28 February 2026 01:06:48 +0000 (0:00:00.322) 0:00:00.661 *****
2026-02-28 01:08:47.874635 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-28 01:08:47.874644 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-28 01:08:47.874652 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-28 01:08:47.874665 | orchestrator |
2026-02-28 01:08:47.874709 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-28 01:08:47.874727 | orchestrator |
2026-02-28 01:08:47.874741 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-28 01:08:47.874754 | orchestrator | Saturday 28 February 2026 01:06:48 +0000 (0:00:00.446) 0:00:01.108 *****
2026-02-28 01:08:47.874766 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:08:47.875062 | orchestrator |
2026-02-28 01:08:47.875090 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-02-28 01:08:47.875105 | orchestrator | Saturday 28 February 2026 01:06:49 +0000 (0:00:00.678) 0:00:01.786 *****
2026-02-28 01:08:47.875173 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-28 01:08:47.875182 | orchestrator |
2026-02-28 01:08:47.875190 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-02-28 01:08:47.875198 | orchestrator | Saturday 28 February 2026 01:06:53 +0000 (0:00:03.759) 0:00:05.546 *****
2026-02-28 01:08:47.875206 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-28 01:08:47.875214 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-28 01:08:47.875222 | orchestrator |
2026-02-28 01:08:47.875230 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-28 01:08:47.875238 | orchestrator | Saturday 28 February 2026 01:07:00 +0000 (0:00:07.061) 0:00:12.608 *****
2026-02-28 01:08:47.875246 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-28 01:08:47.875254 | orchestrator |
2026-02-28 01:08:47.875273 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-28 01:08:47.875282 | orchestrator | Saturday 28 February 2026 01:07:03 +0000 (0:00:03.476) 0:00:16.084 *****
2026-02-28 01:08:47.875290 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:08:47.875298 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-28 01:08:47.875312 | orchestrator |
2026-02-28 01:08:47.875320 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-28 01:08:47.875328 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:04.004) 0:00:20.089 *****
2026-02-28 01:08:47.875335 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-28 01:08:47.875343 | orchestrator |
2026-02-28 01:08:47.875368 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-28 01:08:47.875377 | orchestrator | Saturday 28 February 2026 01:07:11 +0000 (0:00:03.808) 0:00:23.897 *****
2026-02-28 01:08:47.875384 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-28 01:08:47.875392 | orchestrator |
2026-02-28 01:08:47.875400 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-28 01:08:47.875408 | orchestrator | Saturday 28 February 2026 01:07:15 +0000 (0:00:04.265) 0:00:28.163 *****
2026-02-28 01:08:47.875416 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:47.875424 | orchestrator |
2026-02-28 01:08:47.875432 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-28 01:08:47.875440 | orchestrator | Saturday 28 February 2026 01:07:19 +0000 (0:00:03.502) 0:00:31.666 *****
2026-02-28 01:08:47.875448 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:47.875456 | orchestrator |
2026-02-28 01:08:47.875464 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-28 01:08:47.875472 | orchestrator | Saturday 28 February 2026 01:07:23 +0000 (0:00:04.150) 0:00:35.817 *****
2026-02-28 01:08:47.875479 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:47.875487 | orchestrator |
2026-02-28 01:08:47.875495 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-28 01:08:47.875503 | orchestrator | Saturday 28 February 2026
01:07:27 +0000 (0:00:03.843) 0:00:39.661 ***** 2026-02-28 01:08:47.875526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.875538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.875551 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.875566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.875575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.875591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.875600 | orchestrator | 2026-02-28 01:08:47.875608 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-28 01:08:47.875616 | orchestrator | Saturday 28 February 2026 01:07:28 +0000 (0:00:01.552) 0:00:41.213 ***** 2026-02-28 01:08:47.875624 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:47.875632 | orchestrator | 2026-02-28 01:08:47.875640 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-28 01:08:47.875652 | orchestrator | Saturday 28 February 2026 01:07:28 +0000 (0:00:00.152) 0:00:41.365 ***** 2026-02-28 01:08:47.875666 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:47.875712 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:47.875722 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:47.875740 | orchestrator | 2026-02-28 01:08:47.875748 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-28 01:08:47.875757 | orchestrator | Saturday 28 February 2026 01:07:29 +0000 (0:00:00.635) 0:00:42.001 ***** 2026-02-28 01:08:47.875764 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:08:47.875772 | orchestrator | 2026-02-28 01:08:47.875780 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-28 01:08:47.875788 | orchestrator | Saturday 28 February 2026 01:07:30 +0000 (0:00:01.107) 0:00:43.108 ***** 2026-02-28 01:08:47.875801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.875819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.875830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.875922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.875933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-28 01:08:47.876013 | orchestrator | 2026-02-28 01:08:47.876023 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-28 01:08:47.876032 | orchestrator | Saturday 28 February 2026 01:07:33 +0000 (0:00:02.684) 0:00:45.793 ***** 2026-02-28 01:08:47.876041 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:47.876049 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:47.876057 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:47.876065 | orchestrator | 2026-02-28 01:08:47.876073 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-28 01:08:47.876082 | orchestrator | Saturday 28 February 2026 01:07:33 +0000 (0:00:00.345) 0:00:46.138 ***** 2026-02-28 01:08:47.876090 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:08:47.876098 | orchestrator | 2026-02-28 01:08:47.876106 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-28 01:08:47.876114 | orchestrator | Saturday 28 February 2026 01:07:34 +0000 (0:00:00.950) 0:00:47.088 ***** 2026-02-28 01:08:47.876123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876191 | orchestrator | 2026-02-28 01:08:47.876199 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-28 01:08:47.876207 | orchestrator | Saturday 28 February 2026 01:07:37 +0000 (0:00:02.347) 0:00:49.436 ***** 2026-02-28 01:08:47.876221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876243 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:47.876255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876273 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:47.876282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876326 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:47.876340 | orchestrator | 2026-02-28 01:08:47.876354 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-28 01:08:47.876368 | orchestrator | Saturday 28 February 2026 01:07:37 +0000 (0:00:00.713) 0:00:50.150 ***** 2026-02-28 01:08:47.876380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876417 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:47.876432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876479 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:47.876488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876504 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:47.876512 | orchestrator | 2026-02-28 
01:08:47.876526 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-28 01:08:47.876535 | orchestrator | Saturday 28 February 2026 01:07:39 +0000 (0:00:01.669) 0:00:51.819 ***** 2026-02-28 01:08:47.876543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876609 | orchestrator | 2026-02-28 01:08:47.876617 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-28 01:08:47.876625 | orchestrator | Saturday 28 February 2026 01:07:42 +0000 (0:00:02.721) 0:00:54.541 ***** 2026-02-28 01:08:47.876633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876731 | orchestrator | 2026-02-28 01:08:47.876739 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-28 01:08:47.876752 | orchestrator | Saturday 28 February 2026 01:07:50 +0000 (0:00:08.751) 0:01:03.292 ***** 2026-02-28 01:08:47.876761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876778 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:47.876790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876812 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:47.876826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:08:47.876834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:08:47.876843 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:47.876851 | orchestrator | 2026-02-28 01:08:47.876859 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-28 01:08:47.876867 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.975) 0:01:04.267 ***** 2026-02-28 01:08:47.876878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 
01:08:47.876887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:08:47.876913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:08:47.876942 | orchestrator | 2026-02-28 01:08:47.876951 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-28 01:08:47.876959 | orchestrator | Saturday 28 February 2026 01:07:54 +0000 (0:00:02.860) 0:01:07.128 ***** 2026-02-28 01:08:47.876967 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:47.876977 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:47.876992 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:47.877006 | orchestrator | 2026-02-28 01:08:47.877021 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-28 01:08:47.877043 | orchestrator | Saturday 28 February 2026 01:07:55 +0000 (0:00:00.349) 0:01:07.477 ***** 2026-02-28 01:08:47.877057 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:08:47.877073 | orchestrator | 2026-02-28 01:08:47.877089 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-28 01:08:47.877103 | orchestrator | Saturday 28 February 2026 01:07:57 +0000 (0:00:02.554) 0:01:10.031 ***** 2026-02-28 01:08:47.877112 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:08:47.877119 | orchestrator | 2026-02-28 01:08:47.877127 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-28 01:08:47.877135 | orchestrator | Saturday 28 February 2026 01:07:59 +0000 (0:00:02.206) 0:01:12.237 ***** 2026-02-28 01:08:47.877143 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:08:47.877151 | orchestrator | 2026-02-28 01:08:47.877159 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-28 
01:08:47.877171 | orchestrator | Saturday 28 February 2026 01:08:17 +0000 (0:00:17.294) 0:01:29.532 ***** 2026-02-28 01:08:47.877184 | orchestrator | 2026-02-28 01:08:47.877197 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-28 01:08:47.877210 | orchestrator | Saturday 28 February 2026 01:08:17 +0000 (0:00:00.066) 0:01:29.599 ***** 2026-02-28 01:08:47.877223 | orchestrator | 2026-02-28 01:08:47.877237 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-28 01:08:47.877250 | orchestrator | Saturday 28 February 2026 01:08:17 +0000 (0:00:00.061) 0:01:29.660 ***** 2026-02-28 01:08:47.877265 | orchestrator | 2026-02-28 01:08:47.877273 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-28 01:08:47.877280 | orchestrator | Saturday 28 February 2026 01:08:17 +0000 (0:00:00.070) 0:01:29.731 ***** 2026-02-28 01:08:47.877288 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:08:47.877296 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:08:47.877304 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:08:47.877312 | orchestrator | 2026-02-28 01:08:47.877320 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-28 01:08:47.877328 | orchestrator | Saturday 28 February 2026 01:08:34 +0000 (0:00:17.193) 0:01:46.924 ***** 2026-02-28 01:08:47.877336 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:08:47.877344 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:08:47.877352 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:08:47.877359 | orchestrator | 2026-02-28 01:08:47.877374 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:08:47.877383 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:08:47.877392 
| orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:08:47.877400 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:08:47.877408 | orchestrator | 2026-02-28 01:08:47.877415 | orchestrator | 2026-02-28 01:08:47.877424 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:08:47.877435 | orchestrator | Saturday 28 February 2026 01:08:45 +0000 (0:00:11.298) 0:01:58.222 ***** 2026-02-28 01:08:47.877448 | orchestrator | =============================================================================== 2026-02-28 01:08:47.877464 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.29s 2026-02-28 01:08:47.877484 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.19s 2026-02-28 01:08:47.877497 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.30s 2026-02-28 01:08:47.877509 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.75s 2026-02-28 01:08:47.877522 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.06s 2026-02-28 01:08:47.877548 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.27s 2026-02-28 01:08:47.877562 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.15s 2026-02-28 01:08:47.877575 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.00s 2026-02-28 01:08:47.877589 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.84s 2026-02-28 01:08:47.877602 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.81s 2026-02-28 01:08:47.877616 | orchestrator | service-ks-register : 
magnum | Creating services ------------------------ 3.76s 2026-02-28 01:08:47.877624 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.50s 2026-02-28 01:08:47.877632 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.48s 2026-02-28 01:08:47.877645 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.86s 2026-02-28 01:08:47.877653 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.72s 2026-02-28 01:08:47.877661 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.68s 2026-02-28 01:08:47.877669 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.55s 2026-02-28 01:08:47.877677 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.35s 2026-02-28 01:08:47.877794 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.21s 2026-02-28 01:08:47.877823 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.67s 2026-02-28 01:08:47.877832 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:08:47.877840 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:08:47.877848 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:08:47.877856 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:08:47.877864 | orchestrator | 2026-02-28 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:01.038229 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:01.039237 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:01.040766 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:01.041795 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:01.041844 | orchestrator | 2026-02-28 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:04.080914 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:04.081645 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:04.082627 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:04.084146 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:04.084183 | orchestrator | 2026-02-28 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:07.118910 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:07.120655 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:07.121701 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:07.123169 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:07.123372 | orchestrator | 2026-02-28 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:10.164210 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:10.165465 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:10.166706 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:10.167716 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:10.167747 | orchestrator | 2026-02-28 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:13.212150 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:13.212780 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task 
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:13.214647 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:13.215957 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:13.215984 | orchestrator | 2026-02-28 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:16.256258 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:16.258527 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:16.259936 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:16.261369 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:16.261406 | orchestrator | 2026-02-28 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:19.308712 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:19.309969 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:19.311928 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:19.313905 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:19.313986 | orchestrator | 2026-02-28 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:22.350124 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:22.350788 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task 
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:22.351935 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:22.352644 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:22.352924 | orchestrator | 2026-02-28 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:25.396019 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:25.397101 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:25.398420 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:25.400403 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:25.400438 | orchestrator | 2026-02-28 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:28.446772 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:28.447893 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:28.449617 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:28.452146 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:28.452461 | orchestrator | 2026-02-28 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:31.497743 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:31.498480 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task 
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:31.499577 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:31.502943 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:31.502980 | orchestrator | 2026-02-28 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:34.555029 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:34.555124 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:34.555727 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:34.556331 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:34.556362 | orchestrator | 2026-02-28 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:37.586081 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:37.586344 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:37.587063 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:37.587744 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:37.587796 | orchestrator | 2026-02-28 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:40.631845 | orchestrator | 2026-02-28 01:10:40 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:40.632421 | orchestrator | 2026-02-28 01:10:40 | INFO  | Task 
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:40.633816 | orchestrator | 2026-02-28 01:10:40 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:40.635925 | orchestrator | 2026-02-28 01:10:40 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:40.635957 | orchestrator | 2026-02-28 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:43.679611 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:43.680247 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:43.681097 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:43.682177 | orchestrator | 2026-02-28 01:10:43 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:43.682208 | orchestrator | 2026-02-28 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:46.725404 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:46.726502 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:46.727883 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:46.729745 | orchestrator | 2026-02-28 01:10:46 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:46.730251 | orchestrator | 2026-02-28 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:49.778108 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:49.778875 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task 
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:49.779410 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:49.780501 | orchestrator | 2026-02-28 01:10:49 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:49.780539 | orchestrator | 2026-02-28 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:52.830630 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:52.832799 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:52.833896 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:52.836623 | orchestrator | 2026-02-28 01:10:52 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:52.836691 | orchestrator | 2026-02-28 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:55.873685 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:55.878753 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:55.880413 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:55.881818 | orchestrator | 2026-02-28 01:10:55 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:55.882168 | orchestrator | 2026-02-28 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:58.924798 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:10:58.925250 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task 
9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:10:58.926774 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:10:58.929594 | orchestrator | 2026-02-28 01:10:58 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:10:58.929701 | orchestrator | 2026-02-28 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:01.968078 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:11:01.969355 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:11:01.970601 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state STARTED 2026-02-28 01:11:01.971994 | orchestrator | 2026-02-28 01:11:01 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state STARTED 2026-02-28 01:11:01.972076 | orchestrator | 2026-02-28 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:05.041083 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:11:05.042068 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:11:05.044909 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 88f3f13a-1648-4d2d-b2c9-9bccf8c5e130 is in state SUCCESS 2026-02-28 01:11:05.046746 | orchestrator | 2026-02-28 01:11:05.046797 | orchestrator | 2026-02-28 01:11:05.046804 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:11:05.046810 | orchestrator | 2026-02-28 01:11:05.046815 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:11:05.046820 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.416) 0:00:00.416 
***** 2026-02-28 01:11:05.046824 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:11:05.046830 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:11:05.046834 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:11:05.046838 | orchestrator | 2026-02-28 01:11:05.046842 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:11:05.046847 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.398) 0:00:00.815 ***** 2026-02-28 01:11:05.046851 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-28 01:11:05.046856 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-28 01:11:05.046860 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-28 01:11:05.046864 | orchestrator | 2026-02-28 01:11:05.046868 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-28 01:11:05.046872 | orchestrator | 2026-02-28 01:11:05.046877 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:11:05.046881 | orchestrator | Saturday 28 February 2026 01:07:52 +0000 (0:00:00.886) 0:00:01.701 ***** 2026-02-28 01:11:05.046885 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:05.046890 | orchestrator | 2026-02-28 01:11:05.046894 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-28 01:11:05.046898 | orchestrator | Saturday 28 February 2026 01:07:53 +0000 (0:00:00.637) 0:00:02.338 ***** 2026-02-28 01:11:05.046903 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-28 01:11:05.046907 | orchestrator | 2026-02-28 01:11:05.046911 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-28 01:11:05.046915 | orchestrator | Saturday 28 February 2026 
01:07:57 +0000 (0:00:04.169) 0:00:06.509 ***** 2026-02-28 01:11:05.046920 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-28 01:11:05.046942 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-28 01:11:05.046947 | orchestrator | 2026-02-28 01:11:05.046951 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-28 01:11:05.046955 | orchestrator | Saturday 28 February 2026 01:08:03 +0000 (0:00:05.602) 0:00:12.111 ***** 2026-02-28 01:11:05.046959 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:11:05.046964 | orchestrator | 2026-02-28 01:11:05.046979 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-28 01:11:05.046984 | orchestrator | Saturday 28 February 2026 01:08:06 +0000 (0:00:03.026) 0:00:15.137 ***** 2026-02-28 01:11:05.046988 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:11:05.046992 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-28 01:11:05.046996 | orchestrator | 2026-02-28 01:11:05.047000 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-28 01:11:05.047005 | orchestrator | Saturday 28 February 2026 01:08:09 +0000 (0:00:03.855) 0:00:18.993 ***** 2026-02-28 01:11:05.047009 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:11:05.047013 | orchestrator | 2026-02-28 01:11:05.047017 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-28 01:11:05.047021 | orchestrator | Saturday 28 February 2026 01:08:13 +0000 (0:00:03.807) 0:00:22.800 ***** 2026-02-28 01:11:05.047026 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-28 
01:11:05.047030 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-28 01:11:05.047034 | orchestrator | 2026-02-28 01:11:05.047038 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-28 01:11:05.047042 | orchestrator | Saturday 28 February 2026 01:08:21 +0000 (0:00:07.915) 0:00:30.716 ***** 2026-02-28 01:11:05.047049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.047070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.047075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.047127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047220 | orchestrator | 2026-02-28 01:11:05.047244 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:11:05.047248 | orchestrator | Saturday 28 February 2026 01:08:24 +0000 (0:00:02.555) 0:00:33.271 ***** 2026-02-28 01:11:05.047281 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 01:11:05.047286 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.047291 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.047317 | orchestrator | 2026-02-28 01:11:05.047321 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:11:05.047326 | orchestrator | Saturday 28 February 2026 01:08:24 +0000 (0:00:00.303) 0:00:33.575 ***** 2026-02-28 01:11:05.047349 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:05.047357 | orchestrator | 2026-02-28 01:11:05.047366 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-28 01:11:05.047374 | orchestrator | Saturday 28 February 2026 01:08:25 +0000 (0:00:00.701) 0:00:34.276 ***** 2026-02-28 01:11:05.047403 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-28 01:11:05.047411 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-28 01:11:05.047417 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-28 01:11:05.047425 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-28 01:11:05.047518 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-28 01:11:05.047524 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-28 01:11:05.047529 | orchestrator | 2026-02-28 01:11:05.047534 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-28 01:11:05.047539 | orchestrator | Saturday 28 February 2026 01:08:27 +0000 (0:00:01.919) 0:00:36.195 ***** 2026-02-28 01:11:05.047546 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:11:05.047557 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:11:05.047562 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:11:05.047566 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:11:05.047580 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:11:05.047585 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:11:05.047592 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:11:05.047598 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:11:05.047602 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:11:05.047615 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:11:05.047620 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:11:05.047627 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:11:05.047631 | orchestrator | 2026-02-28 01:11:05.047656 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-28 01:11:05.047661 | orchestrator | Saturday 28 February 2026 01:08:30 +0000 (0:00:02.976) 0:00:39.172 ***** 2026-02-28 
01:11:05.047665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.047670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.047674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.047678 | orchestrator | 2026-02-28 01:11:05.047683 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-28 01:11:05.047687 | orchestrator | Saturday 28 February 2026 01:08:31 +0000 (0:00:01.740) 0:00:40.912 ***** 2026-02-28 01:11:05.047691 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-28 01:11:05.047695 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-28 01:11:05.047699 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-28 01:11:05.047704 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:11:05.047708 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:11:05.047712 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:11:05.047716 | orchestrator | 2026-02-28 01:11:05.047720 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-28 01:11:05.047728 | orchestrator | Saturday 28 February 2026 01:08:34 +0000 (0:00:02.990) 0:00:43.903 ***** 2026-02-28 01:11:05.047732 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-28 01:11:05.047737 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-28 01:11:05.047741 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-28 01:11:05.047745 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-28 
01:11:05.047749 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-28 01:11:05.047753 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-28 01:11:05.047758 | orchestrator | 2026-02-28 01:11:05.047762 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-28 01:11:05.047766 | orchestrator | Saturday 28 February 2026 01:08:36 +0000 (0:00:01.403) 0:00:45.306 ***** 2026-02-28 01:11:05.047770 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.047774 | orchestrator | 2026-02-28 01:11:05.047778 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-28 01:11:05.047783 | orchestrator | Saturday 28 February 2026 01:08:36 +0000 (0:00:00.210) 0:00:45.517 ***** 2026-02-28 01:11:05.047787 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.047791 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.047802 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.047809 | orchestrator | 2026-02-28 01:11:05.047820 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:11:05.047828 | orchestrator | Saturday 28 February 2026 01:08:36 +0000 (0:00:00.285) 0:00:45.802 ***** 2026-02-28 01:11:05.047834 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:05.047841 | orchestrator | 2026-02-28 01:11:05.047847 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-28 01:11:05.047853 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:00.782) 0:00:46.584 ***** 2026-02-28 01:11:05.047861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.047873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.047880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.047898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.047957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048286 | orchestrator | 2026-02-28 01:11:05.048291 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-28 01:11:05.048295 | orchestrator | Saturday 28 February 2026 01:08:42 +0000 (0:00:05.397) 0:00:51.981 ***** 2026-02-28 01:11:05.048305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048336 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.048341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048395 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.048402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048461 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.048471 | orchestrator | 2026-02-28 01:11:05.048483 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-28 01:11:05.048489 | orchestrator | Saturday 28 February 2026 01:08:44 +0000 (0:00:01.240) 0:00:53.221 ***** 2026-02-28 01:11:05.048495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048528 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.048540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048574 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.048581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048616 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.048622 | orchestrator | 2026-02-28 01:11:05.048629 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-28 01:11:05.048659 | orchestrator | Saturday 28 February 2026 01:08:46 +0000 (0:00:02.003) 0:00:55.225 ***** 2026-02-28 01:11:05.048666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.048678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.048690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.048701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048761 | orchestrator | 2026-02-28 01:11:05.048766 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-28 01:11:05.048770 | orchestrator | Saturday 28 February 2026 01:08:51 +0000 (0:00:04.924) 0:01:00.150 ***** 2026-02-28 01:11:05.048774 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-28 01:11:05.048782 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-28 01:11:05.048786 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-28 01:11:05.048790 | orchestrator | 2026-02-28 01:11:05.048794 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-28 01:11:05.048799 | orchestrator | Saturday 28 February 2026 01:08:53 +0000 (0:00:02.095) 0:01:02.245 ***** 2026-02-28 01:11:05.048807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.048814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.048819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.048823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048845 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.048886 | orchestrator | 2026-02-28 01:11:05.048891 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-28 01:11:05.048895 | orchestrator | Saturday 28 February 2026 01:09:09 +0000 (0:00:16.168) 0:01:18.414 ***** 2026-02-28 01:11:05.048901 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.048905 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.048910 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.048915 | orchestrator | 2026-02-28 01:11:05.048920 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-28 01:11:05.048926 | orchestrator | Saturday 28 February 2026 01:09:12 +0000 (0:00:02.894) 0:01:21.308 ***** 2026-02-28 01:11:05.048933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048960 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.048965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.048971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.048989 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.048998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:11:05.049007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.049012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.049020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:11:05.049025 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.049030 | orchestrator | 2026-02-28 01:11:05.049035 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-28 01:11:05.049040 | orchestrator | Saturday 28 February 2026 01:09:13 +0000 (0:00:01.080) 0:01:22.389 ***** 2026-02-28 01:11:05.049045 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.049050 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.049055 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.049059 | orchestrator | 2026-02-28 01:11:05.049065 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-28 01:11:05.049070 | orchestrator | Saturday 28 February 2026 01:09:14 +0000 (0:00:00.793) 0:01:23.183 ***** 2026-02-28 01:11:05.049075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.049087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.049093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:11:05.049099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 
01:11:05.049147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:11:05.049220 | orchestrator | 2026-02-28 01:11:05.049227 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:11:05.049233 | orchestrator | Saturday 28 February 2026 01:09:18 +0000 (0:00:04.405) 0:01:27.588 ***** 2026-02-28 01:11:05.049240 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.049246 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.049253 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.049260 | orchestrator | 2026-02-28 01:11:05.049354 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-28 01:11:05.049364 | orchestrator | Saturday 28 February 2026 01:09:19 +0000 (0:00:00.894) 0:01:28.482 ***** 2026-02-28 01:11:05.049371 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049376 | orchestrator | 2026-02-28 01:11:05.049383 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 
2026-02-28 01:11:05.049389 | orchestrator | Saturday 28 February 2026 01:09:21 +0000 (0:00:02.356) 0:01:30.839 ***** 2026-02-28 01:11:05.049396 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049403 | orchestrator | 2026-02-28 01:11:05.049410 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-28 01:11:05.049482 | orchestrator | Saturday 28 February 2026 01:09:24 +0000 (0:00:02.610) 0:01:33.450 ***** 2026-02-28 01:11:05.049489 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049493 | orchestrator | 2026-02-28 01:11:05.049497 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:11:05.049501 | orchestrator | Saturday 28 February 2026 01:09:49 +0000 (0:00:24.912) 0:01:58.362 ***** 2026-02-28 01:11:05.049506 | orchestrator | 2026-02-28 01:11:05.049510 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:11:05.049514 | orchestrator | Saturday 28 February 2026 01:09:49 +0000 (0:00:00.073) 0:01:58.436 ***** 2026-02-28 01:11:05.049518 | orchestrator | 2026-02-28 01:11:05.049522 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:11:05.049527 | orchestrator | Saturday 28 February 2026 01:09:49 +0000 (0:00:00.074) 0:01:58.510 ***** 2026-02-28 01:11:05.049531 | orchestrator | 2026-02-28 01:11:05.049535 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-28 01:11:05.049539 | orchestrator | Saturday 28 February 2026 01:09:49 +0000 (0:00:00.086) 0:01:58.596 ***** 2026-02-28 01:11:05.049543 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049547 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.049552 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.049556 | orchestrator | 2026-02-28 01:11:05.049576 | orchestrator | RUNNING 
HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-28 01:11:05.049580 | orchestrator | Saturday 28 February 2026 01:10:18 +0000 (0:00:28.592) 0:02:27.189 ***** 2026-02-28 01:11:05.049584 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049589 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.049593 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.049597 | orchestrator | 2026-02-28 01:11:05.049602 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-28 01:11:05.049606 | orchestrator | Saturday 28 February 2026 01:10:23 +0000 (0:00:05.840) 0:02:33.030 ***** 2026-02-28 01:11:05.049610 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049614 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.049618 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.049622 | orchestrator | 2026-02-28 01:11:05.049626 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-28 01:11:05.049664 | orchestrator | Saturday 28 February 2026 01:10:53 +0000 (0:00:29.157) 0:03:02.188 ***** 2026-02-28 01:11:05.049670 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.049674 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.049678 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.049683 | orchestrator | 2026-02-28 01:11:05.049687 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-28 01:11:05.049692 | orchestrator | Saturday 28 February 2026 01:11:02 +0000 (0:00:09.183) 0:03:11.371 ***** 2026-02-28 01:11:05.049696 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.049700 | orchestrator | 2026-02-28 01:11:05.049708 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:11:05.049713 | orchestrator | testbed-node-0 : ok=30  changed=22  
unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 01:11:05.049719 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:11:05.049723 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:11:05.049727 | orchestrator | 2026-02-28 01:11:05.049731 | orchestrator | 2026-02-28 01:11:05.049736 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:11:05.049740 | orchestrator | Saturday 28 February 2026 01:11:02 +0000 (0:00:00.270) 0:03:11.642 ***** 2026-02-28 01:11:05.049744 | orchestrator | =============================================================================== 2026-02-28 01:11:05.049748 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 29.16s 2026-02-28 01:11:05.049752 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.59s 2026-02-28 01:11:05.049756 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 24.91s 2026-02-28 01:11:05.049761 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 16.17s 2026-02-28 01:11:05.049765 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.18s 2026-02-28 01:11:05.049769 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.92s 2026-02-28 01:11:05.049773 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.84s 2026-02-28 01:11:05.049777 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.60s 2026-02-28 01:11:05.049782 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.40s 2026-02-28 01:11:05.049786 | orchestrator | cinder : Copying over config.json files for services 
-------------------- 4.92s 2026-02-28 01:11:05.049790 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.41s 2026-02-28 01:11:05.049794 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.17s 2026-02-28 01:11:05.049798 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.86s 2026-02-28 01:11:05.049803 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.81s 2026-02-28 01:11:05.049807 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.03s 2026-02-28 01:11:05.049811 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.99s 2026-02-28 01:11:05.049815 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 2.98s 2026-02-28 01:11:05.049823 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.89s 2026-02-28 01:11:05.049827 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.61s 2026-02-28 01:11:05.049832 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.56s 2026-02-28 01:11:05.050096 | orchestrator | 2026-02-28 01:11:05.050109 | orchestrator | 2026-02-28 01:11:05.050113 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:11:05.050124 | orchestrator | 2026-02-28 01:11:05.050129 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:11:05.050133 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.527) 0:00:00.527 ***** 2026-02-28 01:11:05.050137 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:11:05.050142 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:11:05.050146 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:11:05.050150 | 
orchestrator | 2026-02-28 01:11:05.050154 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:11:05.050159 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.521) 0:00:01.048 ***** 2026-02-28 01:11:05.050163 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-28 01:11:05.050167 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-28 01:11:05.050172 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-28 01:11:05.050177 | orchestrator | 2026-02-28 01:11:05.050181 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-28 01:11:05.050185 | orchestrator | 2026-02-28 01:11:05.050189 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:11:05.050193 | orchestrator | Saturday 28 February 2026 01:07:52 +0000 (0:00:00.675) 0:00:01.724 ***** 2026-02-28 01:11:05.050198 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:05.050203 | orchestrator | 2026-02-28 01:11:05.050207 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-28 01:11:05.050211 | orchestrator | Saturday 28 February 2026 01:07:53 +0000 (0:00:00.695) 0:00:02.420 ***** 2026-02-28 01:11:05.050216 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-28 01:11:05.050220 | orchestrator | 2026-02-28 01:11:05.050224 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-28 01:11:05.050228 | orchestrator | Saturday 28 February 2026 01:07:57 +0000 (0:00:04.351) 0:00:06.772 ***** 2026-02-28 01:11:05.050232 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-28 01:11:05.050237 | orchestrator | changed: 
[testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-28 01:11:05.050241 | orchestrator | 2026-02-28 01:11:05.050249 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-28 01:11:05.050254 | orchestrator | Saturday 28 February 2026 01:08:03 +0000 (0:00:05.783) 0:00:12.556 ***** 2026-02-28 01:11:05.050258 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:11:05.050263 | orchestrator | 2026-02-28 01:11:05.050267 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-28 01:11:05.050271 | orchestrator | Saturday 28 February 2026 01:08:06 +0000 (0:00:02.928) 0:00:15.484 ***** 2026-02-28 01:11:05.050275 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:11:05.050280 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-28 01:11:05.050284 | orchestrator | 2026-02-28 01:11:05.050288 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-28 01:11:05.050292 | orchestrator | Saturday 28 February 2026 01:08:09 +0000 (0:00:03.723) 0:00:19.207 ***** 2026-02-28 01:11:05.050297 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:11:05.050302 | orchestrator | 2026-02-28 01:11:05.050306 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-28 01:11:05.050310 | orchestrator | Saturday 28 February 2026 01:08:13 +0000 (0:00:03.780) 0:00:22.988 ***** 2026-02-28 01:11:05.050314 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-28 01:11:05.050319 | orchestrator | 2026-02-28 01:11:05.050323 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-28 01:11:05.050327 | orchestrator | Saturday 28 February 2026 01:08:18 +0000 (0:00:04.400) 0:00:27.388 ***** 2026-02-28 
01:11:05.050343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050367 | orchestrator | 2026-02-28 01:11:05.050371 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:11:05.050375 | orchestrator | Saturday 28 February 2026 01:08:23 +0000 (0:00:05.613) 0:00:33.002 ***** 2026-02-28 01:11:05.050380 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:05.050384 | orchestrator | 2026-02-28 01:11:05.050391 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-28 01:11:05.050395 | orchestrator | Saturday 28 February 2026 01:08:24 +0000 (0:00:00.634) 0:00:33.636 ***** 2026-02-28 01:11:05.050399 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 01:11:05.050404 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.050408 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.050412 | orchestrator | 2026-02-28 01:11:05.050416 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-28 01:11:05.050420 | orchestrator | Saturday 28 February 2026 01:08:27 +0000 (0:00:03.605) 0:00:37.242 ***** 2026-02-28 01:11:05.050425 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.050429 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.050433 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.050438 | orchestrator | 2026-02-28 01:11:05.050442 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-28 01:11:05.050446 | orchestrator | Saturday 28 February 2026 01:08:29 +0000 (0:00:01.364) 0:00:38.606 ***** 2026-02-28 01:11:05.050451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.050455 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.050459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:11:05.050463 | orchestrator | 2026-02-28 01:11:05.050467 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-28 01:11:05.050472 | orchestrator | Saturday 28 February 2026 01:08:30 +0000 (0:00:01.194) 0:00:39.800 ***** 2026-02-28 01:11:05.050476 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:11:05.050480 | 
orchestrator | ok: [testbed-node-1] 2026-02-28 01:11:05.050484 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:11:05.050488 | orchestrator | 2026-02-28 01:11:05.050493 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-28 01:11:05.050497 | orchestrator | Saturday 28 February 2026 01:08:31 +0000 (0:00:00.811) 0:00:40.612 ***** 2026-02-28 01:11:05.050504 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.050508 | orchestrator | 2026-02-28 01:11:05.050513 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-28 01:11:05.050521 | orchestrator | Saturday 28 February 2026 01:08:31 +0000 (0:00:00.122) 0:00:40.734 ***** 2026-02-28 01:11:05.050525 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.050529 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.050534 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.050541 | orchestrator | 2026-02-28 01:11:05.050549 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:11:05.050555 | orchestrator | Saturday 28 February 2026 01:08:31 +0000 (0:00:00.276) 0:00:41.010 ***** 2026-02-28 01:11:05.050562 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:05.050568 | orchestrator | 2026-02-28 01:11:05.050574 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-28 01:11:05.050581 | orchestrator | Saturday 28 February 2026 01:08:32 +0000 (0:00:00.541) 0:00:41.552 ***** 2026-02-28 01:11:05.050592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', 
'', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050625 | orchestrator | 2026-02-28 01:11:05.050632 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-28 01:11:05.050658 | orchestrator | Saturday 28 February 2026 01:08:36 +0000 (0:00:04.718) 0:00:46.270 ***** 2026-02-28 01:11:05.050670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:11:05.050683 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.050695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:11:05.050702 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.050714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:11:05.050721 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.050728 | orchestrator | 2026-02-28 01:11:05.050735 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-28 01:11:05.050743 | orchestrator | Saturday 28 February 2026 01:08:41 +0000 (0:00:04.770) 0:00:51.040 ***** 2026-02-28 01:11:05.050757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:11:05.050772 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.050783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:11:05.050791 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.050803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 01:11:05.050816 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.050823 | orchestrator | 2026-02-28 01:11:05.050830 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-28 01:11:05.050837 | orchestrator | Saturday 28 February 2026 01:08:46 +0000 (0:00:04.980) 0:00:56.021 ***** 2026-02-28 01:11:05.050845 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.050853 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.050860 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.050868 | orchestrator | 2026-02-28 01:11:05.050873 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-28 01:11:05.050877 | orchestrator | Saturday 28 February 2026 01:08:51 +0000 (0:00:04.945) 0:01:00.966 ***** 2026-02-28 01:11:05.050882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.050912 | orchestrator | 2026-02-28 01:11:05.050916 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-28 01:11:05.050921 | orchestrator | Saturday 28 February 2026 01:08:57 +0000 (0:00:06.221) 0:01:07.188 ***** 2026-02-28 01:11:05.050925 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.050929 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.050933 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.050938 | orchestrator | 2026-02-28 01:11:05.050942 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-28 01:11:05.050946 | orchestrator | Saturday 28 February 2026 01:09:07 +0000 (0:00:09.658) 0:01:16.847 ***** 2026-02-28 01:11:05.050950 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.050954 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.050959 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.050963 | orchestrator | 2026-02-28 01:11:05.050968 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-28 01:11:05.050972 | orchestrator | Saturday 28 February 2026 01:09:12 +0000 (0:00:05.074) 0:01:21.921 ***** 2026-02-28 01:11:05.050976 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.050986 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.050990 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.051018 | orchestrator | 2026-02-28 01:11:05.051024 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-28 
01:11:05.051028 | orchestrator | Saturday 28 February 2026 01:09:17 +0000 (0:00:05.227) 0:01:27.149 ***** 2026-02-28 01:11:05.051032 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.051036 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.051041 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.051045 | orchestrator | 2026-02-28 01:11:05.051049 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-28 01:11:05.051054 | orchestrator | Saturday 28 February 2026 01:09:22 +0000 (0:00:04.319) 0:01:31.468 ***** 2026-02-28 01:11:05.051058 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.051062 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.051066 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.051070 | orchestrator | 2026-02-28 01:11:05.051074 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-28 01:11:05.051079 | orchestrator | Saturday 28 February 2026 01:09:25 +0000 (0:00:03.447) 0:01:34.916 ***** 2026-02-28 01:11:05.051083 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.051088 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.051092 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.051096 | orchestrator | 2026-02-28 01:11:05.051100 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-28 01:11:05.051105 | orchestrator | Saturday 28 February 2026 01:09:25 +0000 (0:00:00.313) 0:01:35.229 ***** 2026-02-28 01:11:05.051109 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:11:05.051114 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.051118 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:11:05.051122 | orchestrator | 
skipping: [testbed-node-1] 2026-02-28 01:11:05.051127 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:11:05.051131 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.051135 | orchestrator | 2026-02-28 01:11:05.051140 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-28 01:11:05.051144 | orchestrator | Saturday 28 February 2026 01:09:30 +0000 (0:00:04.188) 0:01:39.417 ***** 2026-02-28 01:11:05.051149 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051156 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.051161 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.051165 | orchestrator | 2026-02-28 01:11:05.051170 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-28 01:11:05.051174 | orchestrator | Saturday 28 February 2026 01:09:35 +0000 (0:00:05.134) 0:01:44.552 ***** 2026-02-28 01:11:05.051180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.051197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.051206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:11:05.051215 | orchestrator | 2026-02-28 01:11:05.051220 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:11:05.051224 | orchestrator | Saturday 28 February 2026 01:09:39 +0000 (0:00:04.372) 0:01:48.924 ***** 2026-02-28 01:11:05.051228 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:05.051233 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:05.051237 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:05.051241 | orchestrator | 2026-02-28 01:11:05.051246 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-28 01:11:05.051250 | orchestrator | Saturday 28 February 2026 01:09:39 +0000 (0:00:00.354) 0:01:49.278 ***** 2026-02-28 01:11:05.051255 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051259 | orchestrator | 2026-02-28 01:11:05.051264 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-02-28 01:11:05.051268 | orchestrator | Saturday 28 February 2026 01:09:42 +0000 (0:00:02.567) 0:01:51.845 ***** 2026-02-28 01:11:05.051272 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051277 | orchestrator | 2026-02-28 01:11:05.051281 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-28 01:11:05.051285 | orchestrator | Saturday 28 February 2026 01:09:45 +0000 (0:00:02.916) 0:01:54.762 ***** 2026-02-28 01:11:05.051290 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051294 | orchestrator | 2026-02-28 01:11:05.051298 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-28 01:11:05.051302 | 
orchestrator | Saturday 28 February 2026 01:09:48 +0000 (0:00:02.536) 0:01:57.299 ***** 2026-02-28 01:11:05.051306 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051311 | orchestrator | 2026-02-28 01:11:05.051315 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-28 01:11:05.051323 | orchestrator | Saturday 28 February 2026 01:10:27 +0000 (0:00:38.986) 0:02:36.285 ***** 2026-02-28 01:11:05.051327 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051331 | orchestrator | 2026-02-28 01:11:05.051335 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-28 01:11:05.051340 | orchestrator | Saturday 28 February 2026 01:10:29 +0000 (0:00:02.813) 0:02:39.098 ***** 2026-02-28 01:11:05.051344 | orchestrator | 2026-02-28 01:11:05.051349 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-28 01:11:05.051353 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.292) 0:02:39.391 ***** 2026-02-28 01:11:05.051357 | orchestrator | 2026-02-28 01:11:05.051361 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-28 01:11:05.051366 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.068) 0:02:39.460 ***** 2026-02-28 01:11:05.051370 | orchestrator | 2026-02-28 01:11:05.051374 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-28 01:11:05.051378 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.067) 0:02:39.528 ***** 2026-02-28 01:11:05.051382 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:11:05.051393 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:11:05.051397 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:11:05.051401 | orchestrator | 2026-02-28 01:11:05.051406 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 01:11:05.051411 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:11:05.051416 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:11:05.051421 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-28 01:11:05.051425 | orchestrator | 2026-02-28 01:11:05.051429 | orchestrator | 2026-02-28 01:11:05.051433 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:11:05.051441 | orchestrator | Saturday 28 February 2026 01:11:01 +0000 (0:00:31.453) 0:03:10.982 ***** 2026-02-28 01:11:05.051445 | orchestrator | =============================================================================== 2026-02-28 01:11:05.051452 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 38.99s 2026-02-28 01:11:05.051457 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.45s 2026-02-28 01:11:05.051461 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.66s 2026-02-28 01:11:05.051465 | orchestrator | glance : Copying over config.json files for services -------------------- 6.22s 2026-02-28 01:11:05.051470 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.78s 2026-02-28 01:11:05.051474 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.61s 2026-02-28 01:11:05.051478 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.23s 2026-02-28 01:11:05.051483 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.13s 2026-02-28 01:11:05.051487 | orchestrator | glance : Copying over 
glance-cache.conf for glance_api ------------------ 5.07s 2026-02-28 01:11:05.051491 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.98s 2026-02-28 01:11:05.051496 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.95s 2026-02-28 01:11:05.051500 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.77s 2026-02-28 01:11:05.051504 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.72s 2026-02-28 01:11:05.051508 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.40s 2026-02-28 01:11:05.051512 | orchestrator | glance : Check glance containers ---------------------------------------- 4.37s 2026-02-28 01:11:05.051517 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.35s 2026-02-28 01:11:05.051521 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.32s 2026-02-28 01:11:05.051525 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.19s 2026-02-28 01:11:05.051529 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.78s 2026-02-28 01:11:05.051534 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.72s 2026-02-28 01:11:05.051538 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 283f538d-6c3f-4e4f-9cc2-ca7d2f0ae84d is in state SUCCESS 2026-02-28 01:11:05.051614 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED 2026-02-28 01:11:05.051621 | orchestrator | 2026-02-28 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:08.093146 | orchestrator | 2026-02-28 01:11:08 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:11:08.097522 | orchestrator | 
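The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from a client that polls the state of the queued deployment tasks until they finish. A minimal sketch of such a poll loop (names and signatures are hypothetical illustrations, not the OSISM implementation):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task's state until none is still running.

    get_state: callable mapping a task id to a state string such as
    "STARTED" or "SUCCESS" (hypothetical interface for illustration).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                # Task reached a terminal state; stop polling it.
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Each task is re-checked every cycle, which matches the log pattern where three task UUIDs are reported together before each wait message.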
2026-02-28 01:11:08 … 01:11:41 | orchestrator | [identical polling cycles repeated every ~3 s: tasks d66bd899-1f2d-4b7a-ae3c-523290139387, 9113417b-54ac-4c41-bfa2-65bd97e188ac and 0fc71976-f329-4f13-9274-61d184e47127 remained in state STARTED] 2026-02-28
01:11:41.604663 | orchestrator | 2026-02-28 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:44.644329 | orchestrator | 2026-02-28 01:11:44 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:11:44.644571 | orchestrator | 2026-02-28 01:11:44 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state STARTED 2026-02-28 01:11:44.645685 | orchestrator | 2026-02-28 01:11:44 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED 2026-02-28 01:11:44.645727 | orchestrator | 2026-02-28 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:47.685483 | orchestrator | 2026-02-28 01:11:47 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED 2026-02-28 01:11:47.688194 | orchestrator | 2026-02-28 01:11:47 | INFO  | Task 9113417b-54ac-4c41-bfa2-65bd97e188ac is in state SUCCESS 2026-02-28 01:11:47.690647 | orchestrator | 2026-02-28 01:11:47.690734 | orchestrator | 2026-02-28 01:11:47.690821 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:11:47.690841 | orchestrator | 2026-02-28 01:11:47.690878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:11:47.690896 | orchestrator | Saturday 28 February 2026 01:08:53 +0000 (0:00:00.368) 0:00:00.368 ***** 2026-02-28 01:11:47.690911 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:11:47.690925 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:11:47.690939 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:11:47.690954 | orchestrator | 2026-02-28 01:11:47.691052 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:11:47.691062 | orchestrator | Saturday 28 February 2026 01:08:53 +0000 (0:00:00.654) 0:00:01.022 ***** 2026-02-28 01:11:47.691071 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-28 01:11:47.691080 | 
orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-28 01:11:47.691089 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-28 01:11:47.691098 | orchestrator | 2026-02-28 01:11:47.691107 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-28 01:11:47.691243 | orchestrator | 2026-02-28 01:11:47.691258 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-28 01:11:47.691269 | orchestrator | Saturday 28 February 2026 01:08:54 +0000 (0:00:00.887) 0:00:01.910 ***** 2026-02-28 01:11:47.691280 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:47.691315 | orchestrator | 2026-02-28 01:11:47.691327 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-28 01:11:47.691376 | orchestrator | Saturday 28 February 2026 01:08:55 +0000 (0:00:01.088) 0:00:02.998 ***** 2026-02-28 01:11:47.691398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.691419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.691437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.691453 | orchestrator | 2026-02-28 01:11:47.691467 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-28 01:11:47.691479 | orchestrator | Saturday 28 February 2026 01:08:56 +0000 (0:00:01.137) 0:00:04.136 ***** 2026-02-28 01:11:47.691489 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-28 01:11:47.691501 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-28 01:11:47.691512 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:11:47.691524 | orchestrator | 2026-02-28 01:11:47.691539 | orchestrator | TASK [grafana : include_tasks] 
************************************************* 2026-02-28 01:11:47.691554 | orchestrator | Saturday 28 February 2026 01:08:57 +0000 (0:00:00.957) 0:00:05.094 ***** 2026-02-28 01:11:47.691649 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:11:47.691667 | orchestrator | 2026-02-28 01:11:47.691681 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-28 01:11:47.691693 | orchestrator | Saturday 28 February 2026 01:08:58 +0000 (0:00:01.186) 0:00:06.280 ***** 2026-02-28 01:11:47.691743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.691773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.691790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.691805 | orchestrator | 2026-02-28 01:11:47.691819 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-28 01:11:47.691834 | orchestrator | Saturday 28 February 2026 01:09:01 +0000 (0:00:02.389) 0:00:08.669 ***** 2026-02-28 01:11:47.691843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:11:47.691854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:11:47.691870 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:47.691884 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:47.691917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:11:47.691943 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:47.691957 | orchestrator | 2026-02-28 01:11:47.691971 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-28 01:11:47.691984 | orchestrator | Saturday 28 February 2026 01:09:01 +0000 (0:00:00.494) 0:00:09.163 ***** 2026-02-28 01:11:47.692000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:11:47.692014 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:47.692028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:11:47.692041 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:47.692056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:11:47.692069 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:47.692083 | orchestrator | 2026-02-28 01:11:47.692096 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-28 01:11:47.692108 | orchestrator | Saturday 28 February 2026 01:09:03 +0000 (0:00:01.255) 0:00:10.418 ***** 2026-02-28 01:11:47.692121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.692144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.692175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.692190 | orchestrator | 2026-02-28 01:11:47.692204 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-28 01:11:47.692218 | orchestrator | Saturday 28 February 2026 01:09:04 +0000 (0:00:01.802) 0:00:12.221 ***** 2026-02-28 01:11:47.692231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.692245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.692259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:11:47.692273 | orchestrator | 2026-02-28 01:11:47.692287 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-28 01:11:47.692302 | orchestrator | Saturday 28 February 2026 01:09:06 +0000 (0:00:01.929) 0:00:14.151 ***** 2026-02-28 01:11:47.692316 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:47.692330 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:47.692345 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:47.692360 | orchestrator | 2026-02-28 01:11:47.692374 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-28 01:11:47.692389 | orchestrator | Saturday 28 February 2026 01:09:07 +0000 (0:00:00.814) 0:00:14.966 ***** 2026-02-28 01:11:47.692415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-28 01:11:47.692431 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-28 01:11:47.692447 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-28 01:11:47.692462 | orchestrator | 2026-02-28 01:11:47.692476 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-28 01:11:47.692489 | orchestrator | Saturday 28 February 2026 01:09:09 +0000 (0:00:02.117) 0:00:17.083 ***** 2026-02-28 01:11:47.692499 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-28 01:11:47.692517 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-28 01:11:47.692533 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-28 01:11:47.692542 | orchestrator | 2026-02-28 01:11:47.692551 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-28 01:11:47.692560 | orchestrator | Saturday 28 February 2026 01:09:11 +0000 (0:00:01.846) 0:00:18.929 ***** 2026-02-28 01:11:47.692569 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:11:47.692578 | orchestrator | 2026-02-28 01:11:47.692586 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-28 01:11:47.692595 | orchestrator | Saturday 28 February 2026 01:09:12 +0000 (0:00:01.147) 0:00:20.077 ***** 2026-02-28 01:11:47.692604 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-28 01:11:47.692659 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-28 01:11:47.692668 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:11:47.692677 | orchestrator | ok: [testbed-node-2] 2026-02-28 
01:11:47.692686 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:11:47.692695 | orchestrator | 2026-02-28 01:11:47.692704 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-28 01:11:47.692712 | orchestrator | Saturday 28 February 2026 01:09:13 +0000 (0:00:01.039) 0:00:21.116 ***** 2026-02-28 01:11:47.692721 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:11:47.692731 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:11:47.692739 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:11:47.692749 | orchestrator | 2026-02-28 01:11:47.692758 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-28 01:11:47.692767 | orchestrator | Saturday 28 February 2026 01:09:14 +0000 (0:00:00.927) 0:00:22.043 ***** 2026-02-28 01:11:47.692777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1086455, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1751258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1086455, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772237872.1751258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1086455, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1751258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1086491, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.186126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1086491, 'dev': 105, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.186126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1086491, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.186126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1086461, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1768005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1086461, 'dev': 105, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1768005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1086461, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1768005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1086493, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.189126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 
1086493, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.189126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1086493, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.189126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1086474, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1811259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1086474, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1811259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1086474, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1811259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1086483, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1841333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1086483, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1841333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.692999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1086483, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1841333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1086454, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1739736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1086454, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1739736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1086454, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1739736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1086458, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1751258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1086458, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1751258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1086458, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1751258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1086462, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1772206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1086462, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1772206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1086462, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1772206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1086478, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1822827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1086478, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1822827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.693971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1086478, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1822827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1086486, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.186029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1086486, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.186029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1086486, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.186029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1086459, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1761544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1086459, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1761544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1086459, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1761544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1086481, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.183126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1086481, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.183126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1086481, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.183126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1086476, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1811259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694279 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1086476, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1811259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1086476, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1811259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1086470, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.180126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694318 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1086470, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.180126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1086470, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.180126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1086467, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1791258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-28 01:11:47.694352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1086467, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1791258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1086467, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1791258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1086479, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.183126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1086479, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.183126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1086479, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.183126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1086465, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1781259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1086465, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1781259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1086465, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1781259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1086484, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1848993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1086484, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1848993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1086484, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1848993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1086607, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.241127, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1086607, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.241127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1086607, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.241127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1086504, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2035005, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1086504, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2035005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1086504, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2035005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1086501, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1772237872.192126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1086501, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.192126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1086501, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.192126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 
'inode': 1086521, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2081263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1086521, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2081263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1086521, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2081263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1086498, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1910815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1086498, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1910815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1086498, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1910815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1086544, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2201266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1086544, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2201266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1086544, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2201266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694861 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086522, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2171266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086522, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2171266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086522, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2171266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1086546, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.221343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1086546, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.221343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1086546, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.221343, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1086599, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.236127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1086599, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.236127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1086599, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1772237872.236127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1086541, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2190578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1086541, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2190578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:11:47.694979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1086541, 'dev': 105, 'nlink': 
1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2190578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.694993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086515, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2055702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086515, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2055702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1086503, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1961262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086515, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2055702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1086503, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1961262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086514, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2041264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1086503, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1961262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086514, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2041264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1086502, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1941261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086514, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2041264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1086502, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1941261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1086519, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2071264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1086502, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1941261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1086519, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2071264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1086555, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.235127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1086519, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2071264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1086555, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.235127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1086550, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.222894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1086555, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.235127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1086550, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.222894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1086499, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.191126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1086550, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.222894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1086499, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.191126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1086500, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1917849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1086499, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.191126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1086500, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1917849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086536, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2182841, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1086500, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.1917849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086536, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2182841, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1086548, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.221831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086536, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.2182841, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1086548, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.221831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1086548, 'dev': 105, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772237872.221831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:11:47.695352 | orchestrator |
2026-02-28 01:11:47.695361 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-28 01:11:47.695440 | orchestrator | Saturday 28 February 2026 01:09:55 +0000 (0:00:41.099) 0:01:03.142 *****
2026-02-28 01:11:47.695450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:11:47.695459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:11:47.695500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:11:47.695510 | orchestrator |
2026-02-28 01:11:47.695518 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-02-28 01:11:47.695526 | orchestrator | Saturday 28 February 2026 01:09:56 +0000 (0:00:01.105) 0:01:04.248 *****
2026-02-28 01:11:47.695534 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:11:47.695543 | orchestrator |
2026-02-28 01:11:47.695551 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-02-28 01:11:47.695564 | orchestrator | Saturday 28 February 2026 01:09:59 +0000 (0:00:02.571) 0:01:06.820 *****
2026-02-28 01:11:47.695573 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:11:47.695581 | orchestrator |
2026-02-28 01:11:47.695593 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-28 01:11:47.695671 | orchestrator | Saturday 28 February 2026 01:10:02 +0000 (0:00:02.565) 0:01:09.385 *****
2026-02-28 01:11:47.695691 | orchestrator |
2026-02-28 01:11:47.695705 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-28 01:11:47.695718 | orchestrator | Saturday 28 February 2026 01:10:02 +0000 (0:00:00.102) 0:01:09.487 *****
2026-02-28 01:11:47.695731 | orchestrator |
2026-02-28 01:11:47.695744 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-28 01:11:47.695757 | orchestrator | Saturday 28 February 2026 01:10:02 +0000 (0:00:00.126) 0:01:09.614 *****
2026-02-28 01:11:47.695770 | orchestrator |
2026-02-28 01:11:47.695784 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-02-28 01:11:47.695797 | orchestrator | Saturday 28 February 2026 01:10:02 +0000 (0:00:00.347) 0:01:09.961 *****
2026-02-28 01:11:47.695809 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:11:47.695817 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:11:47.695825 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:11:47.695833 | orchestrator |
2026-02-28 01:11:47.695842 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-02-28 01:11:47.695850 | orchestrator | Saturday 28 February 2026 01:10:04 +0000 (0:00:02.022) 0:01:11.984 *****
2026-02-28 01:11:47.695858 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:11:47.695866 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:11:47.695874 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-02-28 01:11:47.695884 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-02-28 01:11:47.695894 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-02-28 01:11:47.695907 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-02-28 01:11:47.695920 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (8 retries left).
2026-02-28 01:11:47.695932 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:11:47.695945 | orchestrator |
2026-02-28 01:11:47.695958 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-28 01:11:47.695972 | orchestrator | Saturday 28 February 2026 01:11:09 +0000 (0:01:04.959) 0:02:16.944 *****
2026-02-28 01:11:47.695985 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:11:47.696006 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:11:47.696015 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:11:47.696028 | orchestrator |
2026-02-28 01:11:47.696041 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-28 01:11:47.696054 | orchestrator | Saturday 28 February 2026 01:11:39 +0000 (0:00:30.195) 0:02:47.139 *****
2026-02-28 01:11:47.696068 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:11:47.696081 | orchestrator |
2026-02-28 01:11:47.696094 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-28 01:11:47.696108 | orchestrator | Saturday 28 February 2026 01:11:42 +0000 (0:00:02.519) 0:02:49.659 *****
2026-02-28 01:11:47.696117 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:11:47.696125 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:11:47.696133 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:11:47.696141 | orchestrator |
2026-02-28 01:11:47.696149 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-28 01:11:47.696159 | orchestrator | Saturday 28 February 2026 01:11:42 +0000 (0:00:00.605) 0:02:50.264 *****
2026-02-28 01:11:47.696172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-28 01:11:47.696186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-28 01:11:47.696198 | orchestrator |
2026-02-28 01:11:47.696210 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-28 01:11:47.696221 | orchestrator | Saturday 28 February 2026 01:11:45 +0000 (0:00:02.724) 0:02:52.989 *****
2026-02-28 01:11:47.696232 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:11:47.696239 | orchestrator |
2026-02-28 01:11:47.696246 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:11:47.696253 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:11:47.696262 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:11:47.696268 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:11:47.696275 | orchestrator |
2026-02-28 01:11:47.696282 | orchestrator |
2026-02-28 01:11:47.696289 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:11:47.696301 | orchestrator | Saturday 28 February 2026 01:11:46 +0000 (0:00:00.412) 0:02:53.402 *****
2026-02-28 01:11:47.696313 | orchestrator | ===============================================================================
2026-02-28 01:11:47.696321 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 64.96s
2026-02-28 01:11:47.696327 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.10s
2026-02-28 01:11:47.696335 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.20s
2026-02-28 01:11:47.696342 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.72s
2026-02-28 01:11:47.696348 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.57s
2026-02-28 01:11:47.696355 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.57s
2026-02-28 01:11:47.696362 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.52s
2026-02-28 01:11:47.696368 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.39s
2026-02-28 01:11:47.696380 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 2.12s
2026-02-28 01:11:47.696387 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.02s
2026-02-28 01:11:47.696394 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.93s
2026-02-28 01:11:47.696401 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.85s
2026-02-28 01:11:47.696408 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.80s
2026-02-28 01:11:47.696415 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.25s
2026-02-28 01:11:47.696421 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.19s
2026-02-28 01:11:47.696428 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.15s
2026-02-28 01:11:47.696435 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.14s
2026-02-28 01:11:47.696441 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.11s
2026-02-28 01:11:47.696448 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.09s
2026-02-28 01:11:47.696455 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.04s
2026-02-28 01:11:47.696462 | orchestrator | 2026-02-28 01:11:47 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:11:47.696469 | orchestrator | 2026-02-28 01:11:47 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:11:50.732721 | orchestrator | 2026-02-28 01:11:50 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:11:50.733953 | orchestrator | 2026-02-28 01:11:50 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:11:50.733986 | orchestrator | 2026-02-28 01:11:50 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:11:53.769228 | orchestrator | 2026-02-28 01:11:53 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:11:53.769484 | orchestrator | 2026-02-28 01:11:53 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:11:53.769518 | orchestrator | 2026-02-28 01:11:53 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:11:56.805506 | orchestrator | 2026-02-28 01:11:56 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:11:56.806962 | orchestrator | 2026-02-28 01:11:56 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:11:56.807014 | orchestrator | 2026-02-28 01:11:56 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:11:59.851350 | orchestrator | 2026-02-28 01:11:59 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:11:59.853229 | orchestrator | 2026-02-28 01:11:59 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:11:59.853275 | orchestrator | 2026-02-28 01:11:59 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:12:02.899412 | orchestrator | 2026-02-28 01:12:02 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:12:02.900926 | orchestrator | 2026-02-28 01:12:02 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:12:02.900966 | orchestrator | 2026-02-28 01:12:02 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:12:05.943718 | orchestrator | 2026-02-28 01:12:05 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:12:05.946330 | orchestrator | 2026-02-28 01:12:05 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:12:05.946412 | orchestrator | 2026-02-28 01:12:05 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:12:08.997038 | orchestrator | 2026-02-28 01:12:08 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:12:08.999225 | orchestrator | 2026-02-28 01:12:08 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:12:08.999295 | orchestrator | 2026-02-28 01:12:08 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:12:12.049054 | orchestrator | 2026-02-28 01:12:12 | INFO  | Task d66bd899-1f2d-4b7a-ae3c-523290139387 is in state STARTED
2026-02-28 01:12:12.050933 | orchestrator | 2026-02-28 01:12:12 | INFO  | Task 0fc71976-f329-4f13-9274-61d184e47127 is in state STARTED
2026-02-28 01:12:12.050989 | orchestrator | 2026-02-28 01:12:12 | INFO  | Wait 1 second(s) until the next check
2026-02-28 04:30:27.343545 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-02-28 04:30:27.347063 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-28 04:30:28.123511 |
2026-02-28 04:30:28.123767 | PLAY [Post output play]
2026-02-28 04:30:28.141998 |
2026-02-28 04:30:28.142146 | LOOP [stage-output : Register sources]
2026-02-28 04:30:28.197426 |
2026-02-28 04:30:28.197660 | TASK [stage-output : Check sudo]
2026-02-28 04:30:29.062289 | orchestrator | sudo: a password is required
2026-02-28 04:30:29.237313 | orchestrator | ok: Runtime: 0:00:00.016691
2026-02-28 04:30:29.252607 |
2026-02-28 04:30:29.252795 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-28 04:30:29.292867 |
2026-02-28 04:30:29.293140 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-28 04:30:29.361773 | orchestrator | ok
2026-02-28 04:30:29.371151 |
2026-02-28 04:30:29.371286 | LOOP [stage-output : Ensure target folders exist]
2026-02-28 04:30:29.830932 | orchestrator | ok: "docs"
2026-02-28 04:30:29.831307 |
2026-02-28 04:30:30.082630 | orchestrator | ok: "artifacts"
2026-02-28 04:30:30.326563 | orchestrator | ok: "logs"
2026-02-28 04:30:30.351188 |
2026-02-28 04:30:30.351383 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-28 04:30:30.399484 |
2026-02-28 04:30:30.399939 | TASK [stage-output
: Make all log files readable] 2026-02-28 04:30:30.699774 | orchestrator | ok 2026-02-28 04:30:30.709031 | 2026-02-28 04:30:30.709158 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-02-28 04:30:30.743834 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:30.758538 | 2026-02-28 04:30:30.758775 | TASK [stage-output : Discover log files for compression] 2026-02-28 04:30:30.783132 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:30.793427 | 2026-02-28 04:30:30.793567 | LOOP [stage-output : Archive everything from logs] 2026-02-28 04:30:30.833051 | 2026-02-28 04:30:30.833200 | PLAY [Post cleanup play] 2026-02-28 04:30:30.841180 | 2026-02-28 04:30:30.841280 | TASK [Set cloud fact (Zuul deployment)] 2026-02-28 04:30:30.896086 | orchestrator | ok 2026-02-28 04:30:30.906301 | 2026-02-28 04:30:30.906407 | TASK [Set cloud fact (local deployment)] 2026-02-28 04:30:30.929793 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:30.941196 | 2026-02-28 04:30:30.941316 | TASK [Clean the cloud environment] 2026-02-28 04:30:31.521322 | orchestrator | 2026-02-28 04:30:31 - clean up servers 2026-02-28 04:30:32.485390 | orchestrator | 2026-02-28 04:30:32 - testbed-manager 2026-02-28 04:30:32.611186 | orchestrator | 2026-02-28 04:30:32 - testbed-node-4 2026-02-28 04:30:32.702657 | orchestrator | 2026-02-28 04:30:32 - testbed-node-2 2026-02-28 04:30:32.785239 | orchestrator | 2026-02-28 04:30:32 - testbed-node-0 2026-02-28 04:30:32.891872 | orchestrator | 2026-02-28 04:30:32 - testbed-node-5 2026-02-28 04:30:32.980960 | orchestrator | 2026-02-28 04:30:32 - testbed-node-1 2026-02-28 04:30:33.085900 | orchestrator | 2026-02-28 04:30:33 - testbed-node-3 2026-02-28 04:30:33.183536 | orchestrator | 2026-02-28 04:30:33 - clean up keypairs 2026-02-28 04:30:33.204361 | orchestrator | 2026-02-28 04:30:33 - testbed 2026-02-28 04:30:33.228617 | orchestrator | 2026-02-28 04:30:33 - wait for servers to be gone 
2026-02-28 04:30:44.147503 | orchestrator | 2026-02-28 04:30:44 - clean up ports 2026-02-28 04:30:44.335608 | orchestrator | 2026-02-28 04:30:44 - 04d8057d-cfcc-42b8-9d38-3739e58b0055 2026-02-28 04:30:44.594154 | orchestrator | 2026-02-28 04:30:44 - 2b6889cd-e922-4aec-ae2e-9497babb701e 2026-02-28 04:30:44.855831 | orchestrator | 2026-02-28 04:30:44 - 2eab42f0-e1dd-42be-b0bd-4246986dacad 2026-02-28 04:30:45.281481 | orchestrator | 2026-02-28 04:30:45 - 4dbd4aa4-7f48-465f-bec0-00b0b3d7353d 2026-02-28 04:30:45.502596 | orchestrator | 2026-02-28 04:30:45 - 6df801d7-8e8a-4386-8ddb-36e8122465db 2026-02-28 04:30:45.716167 | orchestrator | 2026-02-28 04:30:45 - d7f37536-581d-44bf-9536-9692d539d4b4 2026-02-28 04:30:45.941888 | orchestrator | 2026-02-28 04:30:45 - e469a64e-8712-4606-974d-805d5a4d7b0e 2026-02-28 04:30:46.193975 | orchestrator | 2026-02-28 04:30:46 - clean up volumes 2026-02-28 04:30:46.316194 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-1-node-base 2026-02-28 04:30:46.357211 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-4-node-base 2026-02-28 04:30:46.401108 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-3-node-base 2026-02-28 04:30:46.456333 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-2-node-base 2026-02-28 04:30:46.498161 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-manager-base 2026-02-28 04:30:46.545159 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-5-node-base 2026-02-28 04:30:46.590697 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-0-node-base 2026-02-28 04:30:46.637041 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-0-node-3 2026-02-28 04:30:46.682971 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-3-node-3 2026-02-28 04:30:46.731466 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-5-node-5 2026-02-28 04:30:46.774688 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-4-node-4 2026-02-28 04:30:46.819328 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-1-node-4 2026-02-28 
04:30:46.862638 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-6-node-3 2026-02-28 04:30:46.906170 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-2-node-5 2026-02-28 04:30:46.950930 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-8-node-5 2026-02-28 04:30:46.998145 | orchestrator | 2026-02-28 04:30:46 - testbed-volume-7-node-4 2026-02-28 04:30:47.039902 | orchestrator | 2026-02-28 04:30:47 - disconnect routers 2026-02-28 04:30:47.166230 | orchestrator | 2026-02-28 04:30:47 - testbed 2026-02-28 04:30:48.241675 | orchestrator | 2026-02-28 04:30:48 - clean up subnets 2026-02-28 04:30:48.296756 | orchestrator | 2026-02-28 04:30:48 - subnet-testbed-management 2026-02-28 04:30:48.484739 | orchestrator | 2026-02-28 04:30:48 - clean up networks 2026-02-28 04:30:48.677168 | orchestrator | 2026-02-28 04:30:48 - net-testbed-management 2026-02-28 04:30:49.012390 | orchestrator | 2026-02-28 04:30:49 - clean up security groups 2026-02-28 04:30:49.062411 | orchestrator | 2026-02-28 04:30:49 - testbed-management 2026-02-28 04:30:49.187684 | orchestrator | 2026-02-28 04:30:49 - testbed-node 2026-02-28 04:30:49.310358 | orchestrator | 2026-02-28 04:30:49 - clean up floating ips 2026-02-28 04:30:49.345978 | orchestrator | 2026-02-28 04:30:49 - 81.163.192.181 2026-02-28 04:30:49.739995 | orchestrator | 2026-02-28 04:30:49 - clean up routers 2026-02-28 04:30:49.858677 | orchestrator | 2026-02-28 04:30:49 - testbed 2026-02-28 04:30:50.996529 | orchestrator | ok: Runtime: 0:00:19.498549 2026-02-28 04:30:50.998606 | 2026-02-28 04:30:50.998711 | PLAY RECAP 2026-02-28 04:30:50.998771 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-02-28 04:30:50.998798 | 2026-02-28 04:30:51.146756 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-28 04:30:51.148402 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-28 04:30:51.885602 | 2026-02-28 
04:30:51.885790 | PLAY [Cleanup play] 2026-02-28 04:30:51.901717 | 2026-02-28 04:30:51.901853 | TASK [Set cloud fact (Zuul deployment)] 2026-02-28 04:30:51.941990 | orchestrator | ok 2026-02-28 04:30:51.948882 | 2026-02-28 04:30:51.949020 | TASK [Set cloud fact (local deployment)] 2026-02-28 04:30:51.973085 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:51.983048 | 2026-02-28 04:30:51.983171 | TASK [Clean the cloud environment] 2026-02-28 04:30:53.124567 | orchestrator | 2026-02-28 04:30:53 - clean up servers 2026-02-28 04:30:53.741864 | orchestrator | 2026-02-28 04:30:53 - clean up keypairs 2026-02-28 04:30:53.763121 | orchestrator | 2026-02-28 04:30:53 - wait for servers to be gone 2026-02-28 04:30:53.807452 | orchestrator | 2026-02-28 04:30:53 - clean up ports 2026-02-28 04:30:53.879696 | orchestrator | 2026-02-28 04:30:53 - clean up volumes 2026-02-28 04:30:53.962951 | orchestrator | 2026-02-28 04:30:53 - disconnect routers 2026-02-28 04:30:53.986887 | orchestrator | 2026-02-28 04:30:53 - clean up subnets 2026-02-28 04:30:54.018150 | orchestrator | 2026-02-28 04:30:54 - clean up networks 2026-02-28 04:30:54.708676 | orchestrator | 2026-02-28 04:30:54 - clean up security groups 2026-02-28 04:30:54.756794 | orchestrator | 2026-02-28 04:30:54 - clean up floating ips 2026-02-28 04:30:54.784387 | orchestrator | 2026-02-28 04:30:54 - clean up routers 2026-02-28 04:30:55.040764 | orchestrator | ok: Runtime: 0:00:02.049344 2026-02-28 04:30:55.045927 | 2026-02-28 04:30:55.046064 | PLAY RECAP 2026-02-28 04:30:55.046163 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-02-28 04:30:55.046207 | 2026-02-28 04:30:55.183347 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-28 04:30:55.184465 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-28 04:30:55.939481 | 2026-02-28 04:30:55.939642 | PLAY [Base 
post-fetch] 2026-02-28 04:30:55.954984 | 2026-02-28 04:30:55.955118 | TASK [fetch-output : Set log path for multiple nodes] 2026-02-28 04:30:56.010585 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:56.024737 | 2026-02-28 04:30:56.024919 | TASK [fetch-output : Set log path for single node] 2026-02-28 04:30:56.076117 | orchestrator | ok 2026-02-28 04:30:56.086274 | 2026-02-28 04:30:56.086421 | LOOP [fetch-output : Ensure local output dirs] 2026-02-28 04:30:56.576383 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/work/logs" 2026-02-28 04:30:56.854950 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/work/artifacts" 2026-02-28 04:30:57.114278 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3d98a54cd50b49fcb9eaae05856417e1/work/docs" 2026-02-28 04:30:57.131725 | 2026-02-28 04:30:57.131886 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-02-28 04:30:58.036576 | orchestrator | changed: .d..t...... ./ 2026-02-28 04:30:58.037005 | orchestrator | changed: All items complete 2026-02-28 04:30:58.037075 | 2026-02-28 04:30:58.772998 | orchestrator | changed: .d..t...... ./ 2026-02-28 04:30:59.507485 | orchestrator | changed: .d..t...... 
./ 2026-02-28 04:30:59.538967 | 2026-02-28 04:30:59.539118 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-28 04:30:59.578310 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:59.581212 | orchestrator | skipping: Conditional result was False 2026-02-28 04:30:59.601164 | 2026-02-28 04:30:59.601297 | PLAY RECAP 2026-02-28 04:30:59.601378 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-28 04:30:59.601421 | 2026-02-28 04:30:59.727133 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-28 04:30:59.729589 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-28 04:31:00.469583 | 2026-02-28 04:31:00.469797 | PLAY [Base post] 2026-02-28 04:31:00.484870 | 2026-02-28 04:31:00.485012 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-28 04:31:01.452489 | orchestrator | changed 2026-02-28 04:31:01.462028 | 2026-02-28 04:31:01.462151 | PLAY RECAP 2026-02-28 04:31:01.462220 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-28 04:31:01.462355 | 2026-02-28 04:31:01.590971 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-28 04:31:01.592617 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-28 04:31:02.383018 | 2026-02-28 04:31:02.383201 | PLAY [Base post-logs] 2026-02-28 04:31:02.393916 | 2026-02-28 04:31:02.394054 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-28 04:31:02.859833 | localhost | changed 2026-02-28 04:31:02.878484 | 2026-02-28 04:31:02.878688 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-28 04:31:02.917490 | localhost | ok 2026-02-28 04:31:02.924904 | 2026-02-28 04:31:02.925254 | TASK [Set zuul-log-path fact] 2026-02-28 
04:31:02.942453 | localhost | ok 2026-02-28 04:31:02.954501 | 2026-02-28 04:31:02.954626 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-28 04:31:02.980784 | localhost | ok 2026-02-28 04:31:02.985120 | 2026-02-28 04:31:02.985245 | TASK [upload-logs : Create log directories] 2026-02-28 04:31:03.473000 | localhost | changed 2026-02-28 04:31:03.477472 | 2026-02-28 04:31:03.477636 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-28 04:31:04.011496 | localhost -> localhost | ok: Runtime: 0:00:00.007008 2026-02-28 04:31:04.021231 | 2026-02-28 04:31:04.021459 | TASK [upload-logs : Upload logs to log server] 2026-02-28 04:31:04.638400 | localhost | Output suppressed because no_log was given 2026-02-28 04:31:04.642297 | 2026-02-28 04:31:04.642491 | LOOP [upload-logs : Compress console log and json output] 2026-02-28 04:31:04.702340 | localhost | skipping: Conditional result was False 2026-02-28 04:31:04.707404 | localhost | skipping: Conditional result was False 2026-02-28 04:31:04.716615 | 2026-02-28 04:31:04.717071 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-28 04:31:04.764720 | localhost | skipping: Conditional result was False 2026-02-28 04:31:04.765453 | 2026-02-28 04:31:04.769638 | localhost | skipping: Conditional result was False 2026-02-28 04:31:04.782745 | 2026-02-28 04:31:04.783009 | LOOP [upload-logs : Upload console log and json output]