2026-03-19 00:00:06.919695 | Job console starting
2026-03-19 00:00:06.942496 | Updating git repos
2026-03-19 00:00:07.234948 | Cloning repos into workspace
2026-03-19 00:00:07.494225 | Restoring repo states
2026-03-19 00:00:07.518765 | Merging changes
2026-03-19 00:00:07.518788 | Checking out repos
2026-03-19 00:00:08.113183 | Preparing playbooks
2026-03-19 00:00:09.119887 | Running Ansible setup
2026-03-19 00:00:15.222250 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-19 00:00:16.617537 |
2026-03-19 00:00:16.617715 | PLAY [Base pre]
2026-03-19 00:00:16.644273 |
2026-03-19 00:00:16.644394 | TASK [Setup log path fact]
2026-03-19 00:00:16.687461 | orchestrator | ok
2026-03-19 00:00:16.710952 |
2026-03-19 00:00:16.711096 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-19 00:00:16.743958 | orchestrator | ok
2026-03-19 00:00:16.759931 |
2026-03-19 00:00:16.760047 | TASK [emit-job-header : Print job information]
2026-03-19 00:00:16.805185 | # Job Information
2026-03-19 00:00:16.805339 | Ansible Version: 2.16.14
2026-03-19 00:00:16.805375 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-19 00:00:16.805409 | Pipeline: periodic-midnight
2026-03-19 00:00:16.805432 | Executor: 521e9411259a
2026-03-19 00:00:16.805453 | Triggered by: https://github.com/osism/testbed
2026-03-19 00:00:16.805475 | Event ID: 85a5e9a9786347a19925be0ce68d76d6
2026-03-19 00:00:16.811877 |
2026-03-19 00:00:16.811975 | LOOP [emit-job-header : Print node information]
2026-03-19 00:00:17.133569 | orchestrator | ok:
2026-03-19 00:00:17.133863 | orchestrator | # Node Information
2026-03-19 00:00:17.133904 | orchestrator | Inventory Hostname: orchestrator
2026-03-19 00:00:17.133931 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-19 00:00:17.133953 | orchestrator | Username: zuul-testbed01
2026-03-19 00:00:17.133975 | orchestrator | Distro: Debian 12.13
2026-03-19 00:00:17.133999 | orchestrator | Provider: static-testbed
2026-03-19 00:00:17.134019 | orchestrator | Region:
2026-03-19 00:00:17.134041 | orchestrator | Label: testbed-orchestrator
2026-03-19 00:00:17.134061 | orchestrator | Product Name: OpenStack Nova
2026-03-19 00:00:17.134080 | orchestrator | Interface IP: 81.163.193.140
2026-03-19 00:00:17.168661 |
2026-03-19 00:00:17.168779 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-19 00:00:18.942097 | orchestrator -> localhost | changed
2026-03-19 00:00:18.957995 |
2026-03-19 00:00:18.958110 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-19 00:00:22.766588 | orchestrator -> localhost | changed
2026-03-19 00:00:22.789891 |
2026-03-19 00:00:22.790010 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-19 00:00:23.589601 | orchestrator -> localhost | ok
2026-03-19 00:00:23.598296 |
2026-03-19 00:00:23.598402 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-19 00:00:23.657415 | orchestrator | ok
2026-03-19 00:00:23.695222 | orchestrator | included: /var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-19 00:00:23.704962 |
2026-03-19 00:00:23.705057 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-19 00:00:28.157675 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-19 00:00:28.157944 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/work/ed649bb755a64e3ab8e84305576b127a_id_rsa
2026-03-19 00:00:28.158004 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/work/ed649bb755a64e3ab8e84305576b127a_id_rsa.pub
2026-03-19 00:00:28.158046 | orchestrator -> localhost | The key fingerprint is:
2026-03-19 00:00:28.158088 | orchestrator -> localhost | SHA256:CqDDAGZ9xJrmrVlacGPisPgUqEbHjbEvkD75NgHovVg zuul-build-sshkey
2026-03-19 00:00:28.158124 | orchestrator -> localhost | The key's randomart image is:
2026-03-19 00:00:28.158170 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-19 00:00:28.158205 | orchestrator -> localhost | |.o. o. |
2026-03-19 00:00:28.158240 | orchestrator -> localhost | |+ o o |
2026-03-19 00:00:28.158271 | orchestrator -> localhost | |o.+ O |
2026-03-19 00:00:28.158303 | orchestrator -> localhost | |=B.% = |
2026-03-19 00:00:28.158334 | orchestrator -> localhost | |O.%.O . S |
2026-03-19 00:00:28.158372 | orchestrator -> localhost | |oO.E * . |
2026-03-19 00:00:28.158403 | orchestrator -> localhost | |.o= X . |
2026-03-19 00:00:28.158434 | orchestrator -> localhost | | ..O |
2026-03-19 00:00:28.158466 | orchestrator -> localhost | | . . |
2026-03-19 00:00:28.158498 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-19 00:00:28.158591 | orchestrator -> localhost | ok: Runtime: 0:00:02.413528
2026-03-19 00:00:28.177067 |
2026-03-19 00:00:28.177158 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-19 00:00:28.232590 | orchestrator | ok
2026-03-19 00:00:28.262827 | orchestrator | included: /var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-19 00:00:28.281666 |
2026-03-19 00:00:28.281770 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-19 00:00:28.314849 | orchestrator | skipping: Conditional result was False
2026-03-19 00:00:28.322215 |
2026-03-19 00:00:28.322309 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-19 00:00:29.035677 | orchestrator | changed
2026-03-19 00:00:29.051794 |
2026-03-19 00:00:29.051902 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-19 00:00:29.343338 | orchestrator | ok
2026-03-19 00:00:29.354996 |
2026-03-19 00:00:29.355100 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-19 00:00:29.874521 | orchestrator | ok
2026-03-19 00:00:29.908857 |
2026-03-19 00:00:29.908959 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-19 00:00:30.357559 | orchestrator | ok
2026-03-19 00:00:30.365631 |
2026-03-19 00:00:30.365726 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-19 00:00:30.407634 | orchestrator | skipping: Conditional result was False
2026-03-19 00:00:30.414573 |
2026-03-19 00:00:30.414664 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-19 00:00:32.009316 | orchestrator -> localhost | changed
2026-03-19 00:00:32.040375 |
2026-03-19 00:00:32.040485 | TASK [add-build-sshkey : Add back temp key]
2026-03-19 00:00:32.898307 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/work/ed649bb755a64e3ab8e84305576b127a_id_rsa (zuul-build-sshkey)
2026-03-19 00:00:32.898527 | orchestrator -> localhost | ok: Runtime: 0:00:00.009150
2026-03-19 00:00:32.906675 |
2026-03-19 00:00:32.906770 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-19 00:00:33.510659 | orchestrator | ok
2026-03-19 00:00:33.521064 |
2026-03-19 00:00:33.521169 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-19 00:00:33.585584 | orchestrator | skipping: Conditional result was False
2026-03-19 00:00:33.816292 |
2026-03-19 00:00:33.822362 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-19 00:00:34.647405 | orchestrator | ok
2026-03-19 00:00:34.659025 |
2026-03-19 00:00:34.659123 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-19 00:00:34.741954 | orchestrator | ok
2026-03-19 00:00:34.759805 |
2026-03-19 00:00:34.759910 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-19 00:00:36.139213 | orchestrator -> localhost | ok
2026-03-19 00:00:36.145977 |
2026-03-19 00:00:36.146065 | TASK [validate-host : Collect information about the host]
2026-03-19 00:00:38.037655 | orchestrator | ok
2026-03-19 00:00:38.064648 |
2026-03-19 00:00:38.064767 | TASK [validate-host : Sanitize hostname]
2026-03-19 00:00:38.154790 | orchestrator | ok
2026-03-19 00:00:38.159331 |
2026-03-19 00:00:38.159412 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-19 00:00:39.709287 | orchestrator -> localhost | changed
2026-03-19 00:00:39.714938 |
2026-03-19 00:00:39.715038 | TASK [validate-host : Collect information about zuul worker]
2026-03-19 00:00:40.509860 | orchestrator | ok
2026-03-19 00:00:40.515641 |
2026-03-19 00:00:40.515735 | TASK [validate-host : Write out all zuul information for each host]
2026-03-19 00:00:41.950477 | orchestrator -> localhost | changed
2026-03-19 00:00:41.967964 |
2026-03-19 00:00:41.968156 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-19 00:00:42.315966 | orchestrator | ok
2026-03-19 00:00:42.323004 |
2026-03-19 00:00:42.323094 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-19 00:02:08.716210 | orchestrator | changed:
2026-03-19 00:02:08.716451 | orchestrator | .d..t...... src/
2026-03-19 00:02:08.716512 | orchestrator | .d..t...... src/github.com/
2026-03-19 00:02:08.716548 | orchestrator | .d..t...... src/github.com/osism/
2026-03-19 00:02:08.716571 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-19 00:02:08.716592 | orchestrator | RedHat.yml
2026-03-19 00:02:08.731667 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-19 00:02:08.731684 | orchestrator | RedHat.yml
2026-03-19 00:02:08.731736 | orchestrator | = 1.53.0"...
2026-03-19 00:02:20.582188 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-19 00:02:20.601880 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-19 00:02:21.080499 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-19 00:02:21.754656 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-19 00:02:21.820593 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-19 00:02:22.305430 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-19 00:02:22.730750 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-19 00:02:23.623656 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-19 00:02:23.623717 | orchestrator |
2026-03-19 00:02:23.623724 | orchestrator | Providers are signed by their developers.
2026-03-19 00:02:23.623729 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-19 00:02:23.623734 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-19 00:02:23.623742 | orchestrator |
2026-03-19 00:02:23.623747 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-19 00:02:23.623760 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-19 00:02:23.623764 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-19 00:02:23.623768 | orchestrator | you run "tofu init" in the future.
2026-03-19 00:02:23.623772 | orchestrator |
2026-03-19 00:02:23.623777 | orchestrator | OpenTofu has been successfully initialized!
2026-03-19 00:02:23.623780 | orchestrator |
2026-03-19 00:02:23.623784 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-19 00:02:23.623788 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-19 00:02:23.623792 | orchestrator | should now work.
2026-03-19 00:02:23.623796 | orchestrator |
2026-03-19 00:02:23.623800 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-19 00:02:23.623804 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-19 00:02:23.623808 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-19 00:02:23.790972 | orchestrator | Created and switched to workspace "ci"!
2026-03-19 00:02:23.791087 | orchestrator |
2026-03-19 00:02:23.791102 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-19 00:02:23.791115 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-19 00:02:23.791128 | orchestrator | for this configuration.
2026-03-19 00:02:23.985533 | orchestrator | ci.auto.tfvars
2026-03-19 00:02:24.962077 | orchestrator | default_custom.tf
2026-03-19 00:02:33.729551 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-19 00:02:34.934257 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-19 00:02:35.112625 | orchestrator |
2026-03-19 00:02:35.112714 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-19 00:02:35.112727 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-19 00:02:35.112738 | orchestrator | + create
2026-03-19 00:02:35.112747 | orchestrator | <= read (data resources)
2026-03-19 00:02:35.112756 | orchestrator |
2026-03-19 00:02:35.112765 | orchestrator | OpenTofu will perform the following actions:
2026-03-19 00:02:35.112782 | orchestrator |
2026-03-19 00:02:35.112791 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-19 00:02:35.112800 | orchestrator | # (config refers to values not yet known)
2026-03-19 00:02:35.112808 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-19 00:02:35.112816 | orchestrator | + checksum = (known after apply)
2026-03-19 00:02:35.112825 | orchestrator | + created_at = (known after apply)
2026-03-19 00:02:35.112833 | orchestrator | + file = (known after apply)
2026-03-19 00:02:35.112841 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.112880 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.112890 | orchestrator | + min_disk_gb = (known after apply)
2026-03-19 00:02:35.112905 | orchestrator | + min_ram_mb = (known after apply)
2026-03-19 00:02:35.112919 | orchestrator | + most_recent = true
2026-03-19 00:02:35.112933 | orchestrator | + name = (known after apply)
2026-03-19 00:02:35.112946 | orchestrator | + protected = (known after apply)
2026-03-19 00:02:35.112959 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.113002 | orchestrator | + schema = (known after apply)
2026-03-19 00:02:35.113016 | orchestrator | + size_bytes = (known after apply)
2026-03-19 00:02:35.113028 | orchestrator | + tags = (known after apply)
2026-03-19 00:02:35.113042 | orchestrator | + updated_at = (known after apply)
2026-03-19 00:02:35.113054 | orchestrator | }
2026-03-19 00:02:35.113068 | orchestrator |
2026-03-19 00:02:35.113081 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-19 00:02:35.113093 | orchestrator | # (config refers to values not yet known)
2026-03-19 00:02:35.113106 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-19 00:02:35.113119 | orchestrator | + checksum = (known after apply)
2026-03-19 00:02:35.113133 | orchestrator | + created_at = (known after apply)
2026-03-19 00:02:35.113146 | orchestrator | + file = (known after apply)
2026-03-19 00:02:35.113159 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113173 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.113186 | orchestrator | + min_disk_gb = (known after apply)
2026-03-19 00:02:35.113200 | orchestrator | + min_ram_mb = (known after apply)
2026-03-19 00:02:35.113209 | orchestrator | + most_recent = true
2026-03-19 00:02:35.113218 | orchestrator | + name = (known after apply)
2026-03-19 00:02:35.113225 | orchestrator | + protected = (known after apply)
2026-03-19 00:02:35.113233 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.113241 | orchestrator | + schema = (known after apply)
2026-03-19 00:02:35.113249 | orchestrator | + size_bytes = (known after apply)
2026-03-19 00:02:35.113257 | orchestrator | + tags = (known after apply)
2026-03-19 00:02:35.113264 | orchestrator | + updated_at = (known after apply)
2026-03-19 00:02:35.113272 | orchestrator | }
2026-03-19 00:02:35.113286 | orchestrator |
2026-03-19 00:02:35.113295 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-19 00:02:35.113303 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-19 00:02:35.113311 | orchestrator | + content = (known after apply)
2026-03-19 00:02:35.113319 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 00:02:35.113327 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 00:02:35.113335 | orchestrator | + content_md5 = (known after apply)
2026-03-19 00:02:35.113343 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 00:02:35.113351 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 00:02:35.113359 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 00:02:35.113366 | orchestrator | + directory_permission = "0777"
2026-03-19 00:02:35.113374 | orchestrator | + file_permission = "0644"
2026-03-19 00:02:35.113382 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-19 00:02:35.113390 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113398 | orchestrator | }
2026-03-19 00:02:35.113406 | orchestrator |
2026-03-19 00:02:35.113414 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-19 00:02:35.113422 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-19 00:02:35.113429 | orchestrator | + content = (known after apply)
2026-03-19 00:02:35.113437 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 00:02:35.113445 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 00:02:35.113453 | orchestrator | + content_md5 = (known after apply)
2026-03-19 00:02:35.113461 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 00:02:35.113468 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 00:02:35.113488 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 00:02:35.113496 | orchestrator | + directory_permission = "0777"
2026-03-19 00:02:35.113504 | orchestrator | + file_permission = "0644"
2026-03-19 00:02:35.113521 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-19 00:02:35.113529 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113537 | orchestrator | }
2026-03-19 00:02:35.113545 | orchestrator |
2026-03-19 00:02:35.113553 | orchestrator | # local_file.inventory will be created
2026-03-19 00:02:35.113561 | orchestrator | + resource "local_file" "inventory" {
2026-03-19 00:02:35.113569 | orchestrator | + content = (known after apply)
2026-03-19 00:02:35.113576 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 00:02:35.113584 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 00:02:35.113592 | orchestrator | + content_md5 = (known after apply)
2026-03-19 00:02:35.113599 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 00:02:35.113608 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 00:02:35.113616 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 00:02:35.113624 | orchestrator | + directory_permission = "0777"
2026-03-19 00:02:35.113632 | orchestrator | + file_permission = "0644"
2026-03-19 00:02:35.113640 | orchestrator | + filename = "inventory.ci"
2026-03-19 00:02:35.113648 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113656 | orchestrator | }
2026-03-19 00:02:35.113664 | orchestrator |
2026-03-19 00:02:35.113671 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-19 00:02:35.113679 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-19 00:02:35.113689 | orchestrator | + content = (sensitive value)
2026-03-19 00:02:35.113698 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 00:02:35.113708 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 00:02:35.113717 | orchestrator | + content_md5 = (known after apply)
2026-03-19 00:02:35.113726 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 00:02:35.113736 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 00:02:35.113745 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 00:02:35.113754 | orchestrator | + directory_permission = "0700"
2026-03-19 00:02:35.113764 | orchestrator | + file_permission = "0600"
2026-03-19 00:02:35.113773 | orchestrator | + filename = ".id_rsa.ci"
2026-03-19 00:02:35.113783 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113795 | orchestrator | }
2026-03-19 00:02:35.113808 | orchestrator |
2026-03-19 00:02:35.113821 | orchestrator | # null_resource.node_semaphore will be created
2026-03-19 00:02:35.113834 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-19 00:02:35.113847 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113859 | orchestrator | }
2026-03-19 00:02:35.113868 | orchestrator |
2026-03-19 00:02:35.113879 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-19 00:02:35.113888 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-19 00:02:35.113897 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.113907 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.113916 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.113925 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.113934 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.113943 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-19 00:02:35.113952 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.113962 | orchestrator | + size = 80
2026-03-19 00:02:35.114006 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114053 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114063 | orchestrator | }
2026-03-19 00:02:35.114079 | orchestrator |
2026-03-19 00:02:35.114087 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-19 00:02:35.114095 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 00:02:35.114103 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114111 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114119 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114134 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.114142 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114150 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-19 00:02:35.114158 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114165 | orchestrator | + size = 80
2026-03-19 00:02:35.114173 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114181 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114189 | orchestrator | }
2026-03-19 00:02:35.114197 | orchestrator |
2026-03-19 00:02:35.114205 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-19 00:02:35.114213 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 00:02:35.114221 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114229 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114237 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114245 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.114253 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114261 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-19 00:02:35.114268 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114276 | orchestrator | + size = 80
2026-03-19 00:02:35.114284 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114292 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114300 | orchestrator | }
2026-03-19 00:02:35.114308 | orchestrator |
2026-03-19 00:02:35.114315 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-19 00:02:35.114323 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 00:02:35.114331 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114339 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114347 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114355 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.114363 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114371 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-19 00:02:35.114378 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114386 | orchestrator | + size = 80
2026-03-19 00:02:35.114400 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114408 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114417 | orchestrator | }
2026-03-19 00:02:35.114424 | orchestrator |
2026-03-19 00:02:35.114432 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-19 00:02:35.114440 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 00:02:35.114448 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114456 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114464 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114472 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.114480 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114487 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-19 00:02:35.114495 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114503 | orchestrator | + size = 80
2026-03-19 00:02:35.114511 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114519 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114527 | orchestrator | }
2026-03-19 00:02:35.114535 | orchestrator |
2026-03-19 00:02:35.114543 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-19 00:02:35.114551 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 00:02:35.114559 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114566 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114574 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114588 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.114596 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114604 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-19 00:02:35.114612 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114620 | orchestrator | + size = 80
2026-03-19 00:02:35.114628 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114636 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114644 | orchestrator | }
2026-03-19 00:02:35.114652 | orchestrator |
2026-03-19 00:02:35.114660 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-19 00:02:35.114667 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 00:02:35.114675 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114683 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114691 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114699 | orchestrator | + image_id = (known after apply)
2026-03-19 00:02:35.114707 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114715 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-19 00:02:35.114723 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114731 | orchestrator | + size = 80
2026-03-19 00:02:35.114739 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114747 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114754 | orchestrator | }
2026-03-19 00:02:35.114762 | orchestrator |
2026-03-19 00:02:35.114770 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-19 00:02:35.114779 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.114787 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114795 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114803 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114811 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114819 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-19 00:02:35.114833 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114842 | orchestrator | + size = 20
2026-03-19 00:02:35.114850 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.114858 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.114884 | orchestrator | }
2026-03-19 00:02:35.114892 | orchestrator |
2026-03-19 00:02:35.114900 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-19 00:02:35.114908 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.114916 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.114924 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.114932 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.114940 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.114948 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-19 00:02:35.114955 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.114963 | orchestrator | + size = 20
2026-03-19 00:02:35.115003 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115011 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115019 | orchestrator | }
2026-03-19 00:02:35.115027 | orchestrator |
2026-03-19 00:02:35.115035 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-19 00:02:35.115043 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.115051 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.115059 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.115067 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.115075 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.115083 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-19 00:02:35.115090 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.115104 | orchestrator | + size = 20
2026-03-19 00:02:35.115112 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115119 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115127 | orchestrator | }
2026-03-19 00:02:35.115135 | orchestrator |
2026-03-19 00:02:35.115143 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-19 00:02:35.115151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.115159 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.115166 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.115174 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.115187 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.115195 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-19 00:02:35.115203 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.115211 | orchestrator | + size = 20
2026-03-19 00:02:35.115219 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115227 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115234 | orchestrator | }
2026-03-19 00:02:35.115242 | orchestrator |
2026-03-19 00:02:35.115250 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-19 00:02:35.115258 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.115266 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.115274 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.115281 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.115289 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.115297 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-19 00:02:35.115305 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.115313 | orchestrator | + size = 20
2026-03-19 00:02:35.115321 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115328 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115336 | orchestrator | }
2026-03-19 00:02:35.115344 | orchestrator |
2026-03-19 00:02:35.115352 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-19 00:02:35.115360 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.115368 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.115376 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.115384 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.115391 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.115399 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-19 00:02:35.115407 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.115415 | orchestrator | + size = 20
2026-03-19 00:02:35.115423 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115431 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115438 | orchestrator | }
2026-03-19 00:02:35.115446 | orchestrator |
2026-03-19 00:02:35.115454 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-19 00:02:35.115462 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.115470 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.115478 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.115486 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.115493 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.115501 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-19 00:02:35.115509 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.115517 | orchestrator | + size = 20
2026-03-19 00:02:35.115525 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115532 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115540 | orchestrator | }
2026-03-19 00:02:35.115548 | orchestrator |
2026-03-19 00:02:35.115556 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-19 00:02:35.115564 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 00:02:35.115577 | orchestrator | + attachment = (known after apply)
2026-03-19 00:02:35.115585 | orchestrator | + availability_zone = "nova"
2026-03-19 00:02:35.115593 | orchestrator | + id = (known after apply)
2026-03-19 00:02:35.115601 | orchestrator | + metadata = (known after apply)
2026-03-19 00:02:35.115609 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-19 00:02:35.115616 | orchestrator | + region = (known after apply)
2026-03-19 00:02:35.115624 | orchestrator | + size = 20
2026-03-19 00:02:35.115632 | orchestrator | + volume_retype_policy = "never"
2026-03-19 00:02:35.115640 | orchestrator | + volume_type = "ssd"
2026-03-19 00:02:35.115648 | orchestrator | }
2026-03-19 00:02:35.115656 | orchestrator |
2026-03-19 00:02:35.115664 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-19 00:02:35.115672 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-19 00:02:35.115685 | orchestrator | + attachment = (known after apply) 2026-03-19 00:02:35.115693 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.115701 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.115709 | orchestrator | + metadata = (known after apply) 2026-03-19 00:02:35.115717 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-19 00:02:35.115725 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.115733 | orchestrator | + size = 20 2026-03-19 00:02:35.115741 | orchestrator | + volume_retype_policy = "never" 2026-03-19 00:02:35.115749 | orchestrator | + volume_type = "ssd" 2026-03-19 00:02:35.115757 | orchestrator | } 2026-03-19 00:02:35.115765 | orchestrator | 2026-03-19 00:02:35.115773 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-19 00:02:35.115781 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-19 00:02:35.115788 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.115796 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.115804 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.115812 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.115820 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.115828 | orchestrator | + config_drive = true 2026-03-19 00:02:35.115840 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.115848 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.115856 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-19 00:02:35.115864 | orchestrator | + force_delete = false 2026-03-19 00:02:35.115871 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.115879 | 
orchestrator | + id = (known after apply) 2026-03-19 00:02:35.115887 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.115895 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.115903 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.115910 | orchestrator | + name = "testbed-manager" 2026-03-19 00:02:35.115918 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.115926 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.115934 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.115942 | orchestrator | + stop_before_destroy = false 2026-03-19 00:02:35.115950 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.115958 | orchestrator | + user_data = (sensitive value) 2026-03-19 00:02:35.116017 | orchestrator | 2026-03-19 00:02:35.116033 | orchestrator | + block_device { 2026-03-19 00:02:35.116042 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.116050 | orchestrator | + delete_on_termination = false 2026-03-19 00:02:35.116058 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.116066 | orchestrator | + multiattach = false 2026-03-19 00:02:35.116074 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.116082 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.116095 | orchestrator | } 2026-03-19 00:02:35.116104 | orchestrator | 2026-03-19 00:02:35.116112 | orchestrator | + network { 2026-03-19 00:02:35.116119 | orchestrator | + access_network = false 2026-03-19 00:02:35.116127 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.116135 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.116143 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.116151 | orchestrator | + name = (known after apply) 2026-03-19 00:02:35.116158 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.116166 | orchestrator | + uuid = (known after apply) 2026-03-19 
00:02:35.116174 | orchestrator | } 2026-03-19 00:02:35.116182 | orchestrator | } 2026-03-19 00:02:35.116190 | orchestrator | 2026-03-19 00:02:35.116198 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-19 00:02:35.116206 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 00:02:35.116213 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.116221 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.116229 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.116237 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.116244 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.116252 | orchestrator | + config_drive = true 2026-03-19 00:02:35.116260 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.116268 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.116276 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 00:02:35.116283 | orchestrator | + force_delete = false 2026-03-19 00:02:35.116291 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.116299 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.116307 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.116315 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.116335 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.116343 | orchestrator | + name = "testbed-node-0" 2026-03-19 00:02:35.116351 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.116359 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.116367 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.116375 | orchestrator | + stop_before_destroy = false 2026-03-19 00:02:35.116383 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.116391 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 00:02:35.116399 | orchestrator | 2026-03-19 00:02:35.116407 | orchestrator | + block_device { 2026-03-19 00:02:35.116415 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.116423 | orchestrator | + delete_on_termination = false 2026-03-19 00:02:35.116431 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.116439 | orchestrator | + multiattach = false 2026-03-19 00:02:35.116446 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.116454 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.116462 | orchestrator | } 2026-03-19 00:02:35.116470 | orchestrator | 2026-03-19 00:02:35.116478 | orchestrator | + network { 2026-03-19 00:02:35.116486 | orchestrator | + access_network = false 2026-03-19 00:02:35.116494 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.116501 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.116509 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.116517 | orchestrator | + name = (known after apply) 2026-03-19 00:02:35.116525 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.116533 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.116541 | orchestrator | } 2026-03-19 00:02:35.116549 | orchestrator | } 2026-03-19 00:02:35.116557 | orchestrator | 2026-03-19 00:02:35.116570 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-19 00:02:35.116578 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 00:02:35.116586 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.116599 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.116607 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.116615 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.116622 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.116630 
| orchestrator | + config_drive = true 2026-03-19 00:02:35.116638 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.116646 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.116654 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 00:02:35.116662 | orchestrator | + force_delete = false 2026-03-19 00:02:35.116669 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.116678 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.116685 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.116693 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.116701 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.116709 | orchestrator | + name = "testbed-node-1" 2026-03-19 00:02:35.116717 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.116725 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.116733 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.116741 | orchestrator | + stop_before_destroy = false 2026-03-19 00:02:35.116749 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.116761 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 00:02:35.116770 | orchestrator | 2026-03-19 00:02:35.116778 | orchestrator | + block_device { 2026-03-19 00:02:35.116785 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.116793 | orchestrator | + delete_on_termination = false 2026-03-19 00:02:35.116801 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.116809 | orchestrator | + multiattach = false 2026-03-19 00:02:35.116817 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.116825 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.116832 | orchestrator | } 2026-03-19 00:02:35.116840 | orchestrator | 2026-03-19 00:02:35.116848 | orchestrator | + network { 2026-03-19 00:02:35.116856 | orchestrator | + access_network = 
false 2026-03-19 00:02:35.116864 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.116872 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.116880 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.116888 | orchestrator | + name = (known after apply) 2026-03-19 00:02:35.116896 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.116904 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.116912 | orchestrator | } 2026-03-19 00:02:35.116920 | orchestrator | } 2026-03-19 00:02:35.116928 | orchestrator | 2026-03-19 00:02:35.116936 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-19 00:02:35.116943 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 00:02:35.116951 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.116959 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.117010 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.117020 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.117028 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.117036 | orchestrator | + config_drive = true 2026-03-19 00:02:35.117044 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.117052 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.117060 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 00:02:35.117067 | orchestrator | + force_delete = false 2026-03-19 00:02:35.117075 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.117083 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.117091 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.117104 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.117112 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.117119 | orchestrator | + name = 
"testbed-node-2" 2026-03-19 00:02:35.117126 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.117132 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.117139 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.117145 | orchestrator | + stop_before_destroy = false 2026-03-19 00:02:35.117152 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.117159 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 00:02:35.117166 | orchestrator | 2026-03-19 00:02:35.117172 | orchestrator | + block_device { 2026-03-19 00:02:35.117179 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.117185 | orchestrator | + delete_on_termination = false 2026-03-19 00:02:35.117192 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.117198 | orchestrator | + multiattach = false 2026-03-19 00:02:35.117205 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.117212 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.117218 | orchestrator | } 2026-03-19 00:02:35.117225 | orchestrator | 2026-03-19 00:02:35.117231 | orchestrator | + network { 2026-03-19 00:02:35.117238 | orchestrator | + access_network = false 2026-03-19 00:02:35.117245 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.117251 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.117258 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.117264 | orchestrator | + name = (known after apply) 2026-03-19 00:02:35.117271 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.117277 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.117284 | orchestrator | } 2026-03-19 00:02:35.117291 | orchestrator | } 2026-03-19 00:02:35.117297 | orchestrator | 2026-03-19 00:02:35.117312 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-19 00:02:35.117319 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-19 00:02:35.117326 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.117332 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.117339 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.117345 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.117352 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.117359 | orchestrator | + config_drive = true 2026-03-19 00:02:35.117365 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.117375 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.117382 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 00:02:35.117389 | orchestrator | + force_delete = false 2026-03-19 00:02:35.117395 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.117402 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.117409 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.117415 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.117422 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.117429 | orchestrator | + name = "testbed-node-3" 2026-03-19 00:02:35.117435 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.117442 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.117449 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.117455 | orchestrator | + stop_before_destroy = false 2026-03-19 00:02:35.117462 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.117469 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 00:02:35.117475 | orchestrator | 2026-03-19 00:02:35.117482 | orchestrator | + block_device { 2026-03-19 00:02:35.117489 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.117495 | orchestrator | + delete_on_termination = false 2026-03-19 
00:02:35.117502 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.117513 | orchestrator | + multiattach = false 2026-03-19 00:02:35.117520 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.117526 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.117533 | orchestrator | } 2026-03-19 00:02:35.117540 | orchestrator | 2026-03-19 00:02:35.117546 | orchestrator | + network { 2026-03-19 00:02:35.117553 | orchestrator | + access_network = false 2026-03-19 00:02:35.117559 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.117566 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.117573 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.117579 | orchestrator | + name = (known after apply) 2026-03-19 00:02:35.117586 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.117592 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.117599 | orchestrator | } 2026-03-19 00:02:35.117606 | orchestrator | } 2026-03-19 00:02:35.117612 | orchestrator | 2026-03-19 00:02:35.117619 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-19 00:02:35.117626 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 00:02:35.117633 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.117640 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.117647 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.117653 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.117660 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.117666 | orchestrator | + config_drive = true 2026-03-19 00:02:35.117673 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.117680 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.117687 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 00:02:35.117693 | 
orchestrator | + force_delete = false 2026-03-19 00:02:35.117700 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.117706 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.117713 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.117719 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.117726 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.117732 | orchestrator | + name = "testbed-node-4" 2026-03-19 00:02:35.117739 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.117746 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.117752 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.117759 | orchestrator | + stop_before_destroy = false 2026-03-19 00:02:35.117765 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.117772 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 00:02:35.117779 | orchestrator | 2026-03-19 00:02:35.117786 | orchestrator | + block_device { 2026-03-19 00:02:35.117792 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.117799 | orchestrator | + delete_on_termination = false 2026-03-19 00:02:35.117806 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.117812 | orchestrator | + multiattach = false 2026-03-19 00:02:35.117819 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.117826 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.117833 | orchestrator | } 2026-03-19 00:02:35.117839 | orchestrator | 2026-03-19 00:02:35.117846 | orchestrator | + network { 2026-03-19 00:02:35.117853 | orchestrator | + access_network = false 2026-03-19 00:02:35.117859 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.117866 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.117872 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.117879 | orchestrator | + name = (known 
after apply) 2026-03-19 00:02:35.117885 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.117892 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.117899 | orchestrator | } 2026-03-19 00:02:35.117905 | orchestrator | } 2026-03-19 00:02:35.117917 | orchestrator | 2026-03-19 00:02:35.117942 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-19 00:02:35.117949 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 00:02:35.117956 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 00:02:35.117962 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 00:02:35.117987 | orchestrator | + all_metadata = (known after apply) 2026-03-19 00:02:35.117994 | orchestrator | + all_tags = (known after apply) 2026-03-19 00:02:35.118001 | orchestrator | + availability_zone = "nova" 2026-03-19 00:02:35.118007 | orchestrator | + config_drive = true 2026-03-19 00:02:35.118032 | orchestrator | + created = (known after apply) 2026-03-19 00:02:35.118041 | orchestrator | + flavor_id = (known after apply) 2026-03-19 00:02:35.118047 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 00:02:35.118054 | orchestrator | + force_delete = false 2026-03-19 00:02:35.118060 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 00:02:35.118067 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.118074 | orchestrator | + image_id = (known after apply) 2026-03-19 00:02:35.118080 | orchestrator | + image_name = (known after apply) 2026-03-19 00:02:35.118087 | orchestrator | + key_pair = "testbed" 2026-03-19 00:02:35.118093 | orchestrator | + name = "testbed-node-5" 2026-03-19 00:02:35.118100 | orchestrator | + power_state = "active" 2026-03-19 00:02:35.118111 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.118118 | orchestrator | + security_groups = (known after apply) 2026-03-19 00:02:35.118125 | orchestrator | + 
stop_before_destroy = false 2026-03-19 00:02:35.118131 | orchestrator | + updated = (known after apply) 2026-03-19 00:02:35.118138 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 00:02:35.118145 | orchestrator | 2026-03-19 00:02:35.118151 | orchestrator | + block_device { 2026-03-19 00:02:35.118158 | orchestrator | + boot_index = 0 2026-03-19 00:02:35.118165 | orchestrator | + delete_on_termination = false 2026-03-19 00:02:35.118172 | orchestrator | + destination_type = "volume" 2026-03-19 00:02:35.118178 | orchestrator | + multiattach = false 2026-03-19 00:02:35.118185 | orchestrator | + source_type = "volume" 2026-03-19 00:02:35.118191 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.118198 | orchestrator | } 2026-03-19 00:02:35.118204 | orchestrator | 2026-03-19 00:02:35.118211 | orchestrator | + network { 2026-03-19 00:02:35.118217 | orchestrator | + access_network = false 2026-03-19 00:02:35.118224 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 00:02:35.118231 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 00:02:35.118237 | orchestrator | + mac = (known after apply) 2026-03-19 00:02:35.118244 | orchestrator | + name = (known after apply) 2026-03-19 00:02:35.118250 | orchestrator | + port = (known after apply) 2026-03-19 00:02:35.118257 | orchestrator | + uuid = (known after apply) 2026-03-19 00:02:35.118264 | orchestrator | } 2026-03-19 00:02:35.118270 | orchestrator | } 2026-03-19 00:02:35.118277 | orchestrator | 2026-03-19 00:02:35.118283 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-19 00:02:35.118290 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-19 00:02:35.118296 | orchestrator | + fingerprint = (known after apply) 2026-03-19 00:02:35.118303 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.118310 | orchestrator | + name = "testbed" 2026-03-19 00:02:35.118316 | orchestrator | + private_key = 
(sensitive value) 2026-03-19 00:02:35.118323 | orchestrator | + public_key = (known after apply) 2026-03-19 00:02:35.118329 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.118336 | orchestrator | + user_id = (known after apply) 2026-03-19 00:02:35.118343 | orchestrator | } 2026-03-19 00:02:35.118349 | orchestrator | 2026-03-19 00:02:35.118356 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-19 00:02:35.118363 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-19 00:02:35.118374 | orchestrator | + device = (known after apply) 2026-03-19 00:02:35.118381 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.118388 | orchestrator | + instance_id = (known after apply) 2026-03-19 00:02:35.118394 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.118405 | orchestrator | + volume_id = (known after apply) 2026-03-19 00:02:35.118412 | orchestrator | } 2026-03-19 00:02:35.118419 | orchestrator | 2026-03-19 00:02:35.118425 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-19 00:02:35.118432 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-19 00:02:35.118439 | orchestrator | + device = (known after apply) 2026-03-19 00:02:35.118446 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.118453 | orchestrator | + instance_id = (known after apply) 2026-03-19 00:02:35.118459 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.118466 | orchestrator | + volume_id = (known after apply) 2026-03-19 00:02:35.118472 | orchestrator | } 2026-03-19 00:02:35.118479 | orchestrator | 2026-03-19 00:02:35.118486 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-19 00:02:35.118493 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
      + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
      + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
      + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
      + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
      + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
      + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
          + device      = (known after apply)
          + id          = (known after apply)
          + instance_id = (known after apply)
          + region      = (known after apply)
          + volume_id   = (known after apply)
        }

      # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
      + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
          + fixed_ip    = (known after apply)
          + floating_ip = (known after apply)
          + id          = (known after apply)
          + port_id     = (known after apply)
          + region      = (known after apply)
        }

      # openstack_networking_floatingip_v2.manager_floating_ip will be created
      + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
          + address    = (known after apply)
          + all_tags   = (known after apply)
          + dns_domain = (known after apply)
          + dns_name   = (known after apply)
          + fixed_ip   = (known after apply)
          + id         = (known after apply)
          + pool       = "public"
          + port_id    = (known after apply)
          + region     = (known after apply)
          + subnet_id  = (known after apply)
          + tenant_id  = (known after apply)
        }

      # openstack_networking_network_v2.net_management will be created
      + resource "openstack_networking_network_v2" "net_management" {
          + admin_state_up          = (known after apply)
          + all_tags                = (known after apply)
          + availability_zone_hints = [
              + "nova",
            ]
          + dns_domain              = (known after apply)
          + external                = (known after apply)
          + id                      = (known after apply)
          + mtu                     = (known after apply)
          + name                    = "net-testbed-management"
          + port_security_enabled   = (known after apply)
          + qos_policy_id           = (known after apply)
          + region                  = (known after apply)
          + shared                  = (known after apply)
          + tenant_id               = (known after apply)
          + transparent_vlan        = (known after apply)

          + segments (known after apply)
        }

      # openstack_networking_port_v2.manager_port_management will be created
      + resource "openstack_networking_port_v2" "manager_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.5"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_port_v2.node_port_management[0] will be created
      + resource "openstack_networking_port_v2" "node_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.254/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.9/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.10"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_port_v2.node_port_management[1] will be created
      + resource "openstack_networking_port_v2" "node_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.254/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.9/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.11"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_port_v2.node_port_management[2] will be created
      + resource "openstack_networking_port_v2" "node_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.254/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.9/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.12"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_port_v2.node_port_management[3] will be created
      + resource "openstack_networking_port_v2" "node_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.254/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.9/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.13"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_port_v2.node_port_management[4] will be created
      + resource "openstack_networking_port_v2" "node_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.254/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.9/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.14"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_port_v2.node_port_management[5] will be created
      + resource "openstack_networking_port_v2" "node_port_management" {
          + admin_state_up         = (known after apply)
          + all_fixed_ips          = (known after apply)
          + all_security_group_ids = (known after apply)
          + all_tags               = (known after apply)
          + device_id              = (known after apply)
          + device_owner           = (known after apply)
          + dns_assignment         = (known after apply)
          + dns_name               = (known after apply)
          + id                     = (known after apply)
          + mac_address            = (known after apply)
          + network_id             = (known after apply)
          + port_security_enabled  = (known after apply)
          + qos_policy_id          = (known after apply)
          + region                 = (known after apply)
          + security_group_ids     = (known after apply)
          + tenant_id              = (known after apply)

          + allowed_address_pairs {
              + ip_address = "192.168.16.254/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.8/32"
            }
          + allowed_address_pairs {
              + ip_address = "192.168.16.9/32"
            }

          + binding (known after apply)

          + fixed_ip {
              + ip_address = "192.168.16.15"
              + subnet_id  = (known after apply)
            }
        }

      # openstack_networking_router_interface_v2.router_interface will be created
      + resource "openstack_networking_router_interface_v2" "router_interface" {
          + force_destroy = false
          + id            = (known after apply)
          + port_id       = (known after apply)
          + region        = (known after apply)
          + router_id     = (known after apply)
          + subnet_id     = (known after apply)
        }

      # openstack_networking_router_v2.router will be created
      + resource "openstack_networking_router_v2" "router" {
          + admin_state_up          = (known after apply)
          + all_tags                = (known after apply)
          + availability_zone_hints = [
              + "nova",
            ]
          + distributed             = (known after apply)
          + enable_snat             = (known after apply)
          + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
          + external_qos_policy_id  = (known after apply)
          + id                      = (known after apply)
          + name                    = "testbed"
          + region                  = (known after apply)
          + tenant_id               = (known after apply)

          + external_fixed_ip (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
          + description             = "ssh"
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + port_range_max          = 22
          + port_range_min          = 22
          + protocol                = "tcp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
          + description             = "wireguard"
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + port_range_max          = 51820
          + port_range_min          = 51820
          + protocol                = "udp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "tcp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "192.168.16.0/20"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "udp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "192.168.16.0/20"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "icmp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "tcp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "udp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "icmp"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
      + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
          + description             = "vrrp"
          + direction               = "ingress"
          + ethertype               = "IPv4"
          + id                      = (known after apply)
          + protocol                = "112"
          + region                  = (known after apply)
          + remote_address_group_id = (known after apply)
          + remote_group_id         = (known after apply)
          + remote_ip_prefix        = "0.0.0.0/0"
          + security_group_id       = (known after apply)
          + tenant_id               = (known after apply)
        }

      # openstack_networking_secgroup_v2.security_group_management will be created
      + resource "openstack_networking_secgroup_v2" "security_group_management" {
          + all_tags    = (known after apply)
          + description = "management security group"
          + id          = (known after apply)
          + name        = "testbed-management"
          + region      = (known after apply)
          + stateful    = (known after apply)
          + tenant_id   = (known after apply)
        }

      # openstack_networking_secgroup_v2.security_group_node will be created
      + resource "openstack_networking_secgroup_v2" "security_group_node" {
          + all_tags    = (known after apply)
          + description = "node security group"
          + id          = (known after apply)
          + name        = "testbed-node"
          + region      = (known after apply)
          + stateful    = (known after apply)
          + tenant_id   = (known after apply)
        }

      # openstack_networking_subnet_v2.subnet_management will be created
      + resource "openstack_networking_subnet_v2" "subnet_management" {
          + all_tags          = (known after apply)
          + cidr              = "192.168.16.0/20"
          + dns_nameservers   = [
              + "8.8.8.8",
              + "9.9.9.9",
            ]
          + enable_dhcp       = true
          + gateway_ip        = (known after apply)
          + id                = (known after apply)
          + ip_version        = 4
          + ipv6_address_mode = (known after apply)
          + ipv6_ra_mode      = (known after apply)
          + name              = "subnet-testbed-management"
2026-03-19 00:02:35.123050 | orchestrator | + network_id = (known after apply) 2026-03-19 00:02:35.123056 | orchestrator | + no_gateway = false 2026-03-19 00:02:35.123062 | orchestrator | + region = (known after apply) 2026-03-19 00:02:35.123068 | orchestrator | + service_types = (known after apply) 2026-03-19 00:02:35.123080 | orchestrator | + tenant_id = (known after apply) 2026-03-19 00:02:35.123086 | orchestrator | 2026-03-19 00:02:35.123092 | orchestrator | + allocation_pool { 2026-03-19 00:02:35.123098 | orchestrator | + end = "192.168.31.250" 2026-03-19 00:02:35.123104 | orchestrator | + start = "192.168.31.200" 2026-03-19 00:02:35.123111 | orchestrator | } 2026-03-19 00:02:35.123117 | orchestrator | } 2026-03-19 00:02:35.123133 | orchestrator | 2026-03-19 00:02:35.123139 | orchestrator | # terraform_data.image will be created 2026-03-19 00:02:35.123146 | orchestrator | + resource "terraform_data" "image" { 2026-03-19 00:02:35.123152 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.123158 | orchestrator | + input = "Ubuntu 24.04" 2026-03-19 00:02:35.123164 | orchestrator | + output = (known after apply) 2026-03-19 00:02:35.123170 | orchestrator | } 2026-03-19 00:02:35.123176 | orchestrator | 2026-03-19 00:02:35.123183 | orchestrator | # terraform_data.image_node will be created 2026-03-19 00:02:35.123189 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-19 00:02:35.123195 | orchestrator | + id = (known after apply) 2026-03-19 00:02:35.123201 | orchestrator | + input = "Ubuntu 24.04" 2026-03-19 00:02:35.123207 | orchestrator | + output = (known after apply) 2026-03-19 00:02:35.123213 | orchestrator | } 2026-03-19 00:02:35.123219 | orchestrator | 2026-03-19 00:02:35.123225 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-19 00:02:35.123231 | orchestrator | 2026-03-19 00:02:35.123237 | orchestrator | Changes to Outputs: 2026-03-19 00:02:35.123244 | orchestrator | + manager_address = (sensitive value) 2026-03-19 00:02:35.123250 | orchestrator | + private_key = (sensitive value) 2026-03-19 00:02:35.389169 | orchestrator | terraform_data.image: Creating... 2026-03-19 00:02:35.389400 | orchestrator | terraform_data.image_node: Creating... 2026-03-19 00:02:35.394086 | orchestrator | terraform_data.image: Creation complete after 0s [id=3d53ffac-67b5-d339-909b-6bebb6f53669] 2026-03-19 00:02:35.394131 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=8eb0903a-aa5f-303f-e28b-483013438eff] 2026-03-19 00:02:35.399912 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-19 00:02:35.402123 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-19 00:02:35.403872 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-19 00:02:35.404295 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-19 00:02:35.408135 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-19 00:02:35.408182 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-19 00:02:35.408190 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-19 00:02:35.410252 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-19 00:02:35.413253 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-19 00:02:35.422953 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2026-03-19 00:02:35.881072 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-19 00:02:35.884020 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-19 00:02:35.888852 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-19 00:02:35.889562 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-19 00:02:35.947949 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-19 00:02:35.955724 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-19 00:02:36.511429 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=e0443fe3-35b9-40dc-a20d-e482df741eee]
2026-03-19 00:02:36.526146 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-19 00:02:39.061799 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=e4aaa0c2-0099-489f-8e98-802ea2f51c85]
2026-03-19 00:02:39.374554 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=d5727df1-f3c7-4916-bc14-eaaddd40c7b3]
2026-03-19 00:02:39.374647 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-19 00:02:39.374674 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-19 00:02:39.374732 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f]
2026-03-19 00:02:39.374745 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-19 00:02:39.374756 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=2e1c1462-5959-44f4-a623-e25e33d313c5]
2026-03-19 00:02:39.374767 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-19 00:02:39.374778 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=cc7e233c-8cac-4df4-a011-c93cbddae1f1]
2026-03-19 00:02:39.374788 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-19 00:02:39.374799 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=28009f26-7505-45c2-833e-d396e3f8b400]
2026-03-19 00:02:39.374810 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-19 00:02:39.374820 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361]
2026-03-19 00:02:39.374831 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=184b1ce1-ca15-4595-9678-ac68bfb03600]
2026-03-19 00:02:39.374862 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d]
2026-03-19 00:02:39.374885 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-19 00:02:39.374898 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-19 00:02:39.374911 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-19 00:02:39.380140 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=1f3799e1aec46b77600f065e33fbb4ce93b5fa5b]
2026-03-19 00:02:39.381230 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=73ece3b85869edde4999d6c0158faac442537020]
2026-03-19 00:02:39.875586 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=b18cbe89-f105-4871-b702-3e5de725da02]
2026-03-19 00:02:40.786010 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=1d8e8b55-53ae-4465-a066-64cedaa31da1]
2026-03-19 00:02:40.794800 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-19 00:02:42.469109 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=1685bc7d-e4e0-4b87-bbb4-7dc843c2418d]
2026-03-19 00:02:42.579505 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=3a8b28d7-84b9-47da-9987-4ea2478cc2a4]
2026-03-19 00:02:42.589323 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=8048828a-f8fb-40bf-8a3c-f28dd7047b99]
2026-03-19 00:02:42.600222 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99]
2026-03-19 00:02:42.620309 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=9a641ab7-974c-4f28-9787-11bbad1144db]
2026-03-19 00:02:42.636027 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=ae8b1f95-6d42-4cca-804f-b3321e20a38b]
2026-03-19 00:02:47.800591 | orchestrator | openstack_networking_router_v2.router: Creation complete after 7s [id=ee55bdd1-39b7-4b59-98a2-64fd42d3f8fc]
2026-03-19 00:02:47.811337 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-19 00:02:47.811437 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-19 00:02:47.812114 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-19 00:02:48.036091 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f5ce98d6-9395-4b1f-8fce-ff1bd1a31d89]
2026-03-19 00:02:48.044012 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=8343aaa1-814b-45ef-bc89-cf1558c52602]
2026-03-19 00:02:48.050662 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-19 00:02:48.050901 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-19 00:02:48.051269 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-19 00:02:48.051545 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-19 00:02:48.052121 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-19 00:02:48.052739 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-19 00:02:48.056633 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-19 00:02:48.057824 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-19 00:02:48.058286 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-19 00:02:48.391810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=6f010a95-25bd-4f57-8d33-e552f3f0c195]
2026-03-19 00:02:48.409199 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-19 00:02:48.443676 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a0987b9e-67c4-42af-bccb-cf5152e14b6f]
2026-03-19 00:02:48.456062 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-19 00:02:48.574851 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=5cb53939-ae54-435d-90a2-0ce5028682c5]
2026-03-19 00:02:48.587561 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-19 00:02:48.797023 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=e1a30bfd-f0e6-4a42-9dbd-b9712b71f2ec]
2026-03-19 00:02:48.806315 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-19 00:02:48.907446 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=40bf435a-86d8-4c26-8e12-2054ab23e7cd]
2026-03-19 00:02:48.919807 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-19 00:02:49.128375 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d8f70375-a310-4c5d-8bc1-80bd443edc83]
2026-03-19 00:02:49.136210 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-19 00:02:49.193360 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5f35c046-ac7e-465e-a185-75f26baae30b]
2026-03-19 00:02:49.199341 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-19 00:02:49.549402 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=93e046f6-133e-4221-af90-e5da59af8a2f]
2026-03-19 00:02:49.726203 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=35b267e8-eb00-430d-98ae-9b40abe5569c]
2026-03-19 00:02:49.823188 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=a2a02619-3064-4c96-95bf-0cd152f759d9]
2026-03-19 00:02:50.004886 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=d8fcf612-8099-49b7-a4f8-33a1916cb5cf]
2026-03-19 00:02:50.025382 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=048830e9-b021-455d-9c16-88cf060b5a22]
2026-03-19 00:02:50.206866 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=6a4d5947-d9ef-4fee-b33c-9a4cd5959d99]
2026-03-19 00:02:50.330742 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=59b61709-f904-41e7-a0c6-77cc95b5d1e9]
2026-03-19 00:02:50.363515 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=19e5cbcf-73aa-4c26-942e-2972573ae33d]
2026-03-19 00:02:50.826379 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=df14e4e7-43ef-4965-9107-d73f41ae9c2b]
2026-03-19 00:02:51.222501 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=212db67c-96a5-48e4-a968-a6d8571d980c]
2026-03-19 00:02:51.239119 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-19 00:02:51.245454 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
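The manager_floating_ip and manager_floating_ip_association resources appearing in the log suggest the usual two-step pattern for giving the manager an external address: allocate a floating IP from an external pool, then bind it to the manager's management port. A minimal sketch under that assumption (the pool name "public" is hypothetical; it is not shown in the log):

```hcl
# Allocate a floating IP from the external network.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public" # hypothetical external network name, not visible in the log
}

# Bind the floating IP to the manager's port on the management network.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```

Associating via a pre-created port (rather than letting the instance allocate one) matches the ordering in the log, where manager_port_management is created well before the floating IP is associated.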
2026-03-19 00:02:51.259826 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-19 00:02:51.261992 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-19 00:02:51.262700 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-19 00:02:51.263594 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-19 00:02:51.278285 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-19 00:02:53.554176 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=b806f6e4-13d0-41eb-8ed5-d5b07241038c]
2026-03-19 00:02:53.565875 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-19 00:02:53.570810 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-19 00:02:53.574479 | orchestrator | local_file.inventory: Creating...
2026-03-19 00:02:53.576673 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=cc5c18cd9b46a6fdebe0f8627fbea6b933658e5f]
2026-03-19 00:02:53.580258 | orchestrator | local_file.inventory: Creation complete after 0s [id=c4c1530b333294bcb375c4353229b88ede0db389]
2026-03-19 00:02:54.501157 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=b806f6e4-13d0-41eb-8ed5-d5b07241038c]
2026-03-19 00:03:01.248635 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-19 00:03:01.263252 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-19 00:03:01.264389 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-19 00:03:01.264523 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-19 00:03:01.268599 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-19 00:03:01.279066 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-19 00:03:11.254417 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-19 00:03:11.263820 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-19 00:03:11.265072 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-19 00:03:11.265104 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-19 00:03:11.269580 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-19 00:03:11.280001 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-19 00:03:21.264230 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-19 00:03:21.264328 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-19 00:03:21.265594 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-19 00:03:21.265745 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-19 00:03:21.270215 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-19 00:03:21.280702 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-19 00:03:31.273632 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-19 00:03:31.273759 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-19 00:03:31.273783 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-19 00:03:31.273801 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-19 00:03:31.273819 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-19 00:03:31.280943 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-19 00:03:41.282327 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-19 00:03:41.282428 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-19 00:03:41.282435 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-19 00:03:41.282448 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-19 00:03:41.282455 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-19 00:03:41.282462 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-19 00:03:42.163626 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=4080c931-2520-43da-88de-77eeff2cbfa4]
2026-03-19 00:03:42.538617 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 52s [id=4b78aabd-abce-4e43-9d97-b077c71754eb]
2026-03-19 00:03:42.684965 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 52s [id=60165b46-bccd-4df1-b2c6-ae2e01393fc3]
2026-03-19 00:03:51.286330 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-19 00:03:51.286432 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-03-19 00:03:51.286443 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-03-19 00:03:52.228511 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m1s [id=416e0526-9138-4469-88ee-5bcd7c16767c]
2026-03-19 00:04:01.294166 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m10s elapsed]
2026-03-19 00:04:01.294234 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-03-19 00:04:02.265452 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m11s [id=ca6b166c-5eb5-40e0-be63-9521afdfd546]
2026-03-19 00:04:02.496832 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m11s [id=810b55fa-9077-47fc-b4e1-dc3ff8b8d933]
2026-03-19 00:04:02.506437 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-19 00:04:02.526718 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-19 00:04:02.528659 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=937819393412680963]
2026-03-19 00:04:02.533864 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-19 00:04:02.534276 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-19 00:04:02.534422 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-19 00:04:02.543009 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-19 00:04:02.552038 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-19 00:04:02.561279 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
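The nine node_volume_attachment resources in the log each record a composite ID of the form instance_id/volume_id, which is how openstack_compute_volume_attach_v2 identifies an attachment. A sketch of how such an attachment set could be declared (the count of 9 matches the log; the instance-to-volume index mapping shown here is illustrative, not the testbed's actual expression):

```hcl
# Attach nine data volumes to the node servers.
# The attachment ID becomes "<instance_id>/<volume_id>", matching the log output.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # Hypothetical mapping of volumes to instances; the real testbed code may differ.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Note that in the log the attachments only start once the last node servers finish, and a null_resource.node_semaphore is created alongside them, which is a common pattern for serializing a dependency fan-in before attachment.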
2026-03-19 00:04:02.565589 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-19 00:04:02.567947 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-19 00:04:02.579667 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-19 00:04:06.001455 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=ca6b166c-5eb5-40e0-be63-9521afdfd546/c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d]
2026-03-19 00:04:06.020635 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=60165b46-bccd-4df1-b2c6-ae2e01393fc3/d5727df1-f3c7-4916-bc14-eaaddd40c7b3]
2026-03-19 00:04:06.029679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=416e0526-9138-4469-88ee-5bcd7c16767c/e4aaa0c2-0099-489f-8e98-802ea2f51c85]
2026-03-19 00:04:06.055685 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=60165b46-bccd-4df1-b2c6-ae2e01393fc3/184b1ce1-ca15-4595-9678-ac68bfb03600]
2026-03-19 00:04:06.056792 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=ca6b166c-5eb5-40e0-be63-9521afdfd546/29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361]
2026-03-19 00:04:06.076989 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=416e0526-9138-4469-88ee-5bcd7c16767c/28009f26-7505-45c2-833e-d396e3f8b400]
2026-03-19 00:04:12.164518 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=ca6b166c-5eb5-40e0-be63-9521afdfd546/4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f]
2026-03-19 00:04:12.182905 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=60165b46-bccd-4df1-b2c6-ae2e01393fc3/cc7e233c-8cac-4df4-a011-c93cbddae1f1]
2026-03-19 00:04:12.192034 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=416e0526-9138-4469-88ee-5bcd7c16767c/2e1c1462-5959-44f4-a623-e25e33d313c5]
2026-03-19 00:04:12.581095 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-19 00:04:22.581357 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-19 00:04:23.343513 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=7353e226-156e-4415-b271-5eccd6ad0f58]
2026-03-19 00:04:23.406546 | orchestrator |
2026-03-19 00:04:23.406598 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-19 00:04:23.406611 | orchestrator |
2026-03-19 00:04:23.406616 | orchestrator | Outputs:
2026-03-19 00:04:23.406620 | orchestrator |
2026-03-19 00:04:23.406624 | orchestrator | manager_address =
2026-03-19 00:04:23.406628 | orchestrator | private_key =
2026-03-19 00:04:23.483994 | orchestrator | ok: Runtime: 0:02:03.092894
2026-03-19 00:04:23.507073 |
2026-03-19 00:04:23.507225 | TASK [Create infrastructure (stable)]
2026-03-19 00:04:24.050935 | orchestrator | skipping: Conditional result was False
2026-03-19 00:04:24.071042 |
2026-03-19 00:04:24.071193 | TASK [Fetch manager address]
2026-03-19 00:04:24.560928 | orchestrator | ok
2026-03-19 00:04:24.571510 |
2026-03-19 00:04:24.571658 | TASK [Set manager_host address]
2026-03-19 00:04:24.652880 | orchestrator | ok
2026-03-19 00:04:24.662967 |
2026-03-19 00:04:24.663119 | LOOP [Update ansible collections]
2026-03-19 00:04:25.578610 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-19 00:04:25.579098 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-19 00:04:25.579251 | orchestrator | Starting galaxy collection install process
2026-03-19 00:04:25.579306 | orchestrator | Process install dependency map
2026-03-19 00:04:25.579348 | orchestrator | Starting collection install process
2026-03-19 00:04:25.579408 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-03-19 00:04:25.579456 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-03-19 00:04:25.579515 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-19 00:04:25.579613 | orchestrator | ok: Item: commons Runtime: 0:00:00.569702
2026-03-19 00:04:26.560396 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-19 00:04:26.560565 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-19 00:04:26.560617 | orchestrator | Starting galaxy collection install process
2026-03-19 00:04:26.560655 | orchestrator | Process install dependency map
2026-03-19 00:04:26.560708 | orchestrator | Starting collection install process
2026-03-19 00:04:26.560743 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-03-19 00:04:26.560776 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-03-19 00:04:26.560807 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-19 00:04:26.560858 | orchestrator | ok: Item: services Runtime: 0:00:00.648658
2026-03-19 00:04:26.579777 |
2026-03-19 00:04:26.579939 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-19 00:04:37.133792 | orchestrator | ok
2026-03-19 00:04:37.144640 |
2026-03-19 00:04:37.144757 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-19 00:05:37.187935 | orchestrator | ok
2026-03-19 00:05:37.195667 |
2026-03-19 00:05:37.195778 | TASK [Fetch manager ssh hostkey]
2026-03-19 00:05:39.014765 | orchestrator | Output suppressed because no_log was given
2026-03-19 00:05:39.026606 |
2026-03-19 00:05:39.026794 | TASK [Get ssh keypair from terraform environment]
2026-03-19 00:05:39.578429 | orchestrator | ok: Runtime: 0:00:00.011509
2026-03-19 00:05:39.593904 |
2026-03-19 00:05:39.594137 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-19 00:05:39.643083 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-19 00:05:39.653688 |
2026-03-19 00:05:39.653818 | TASK [Run manager part 0]
2026-03-19 00:05:40.797370 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-19 00:05:40.863900 | orchestrator |
2026-03-19 00:05:40.863956 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-19 00:05:40.863967 | orchestrator |
2026-03-19 00:05:40.863989 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-19 00:05:42.695925 | orchestrator | ok: [testbed-manager]
2026-03-19 00:05:42.695968 | orchestrator |
2026-03-19 00:05:42.695990 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-19 00:05:42.695999 | orchestrator |
2026-03-19 00:05:42.696007 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 00:05:44.631806 | orchestrator | ok: [testbed-manager]
2026-03-19 00:05:44.631847 | orchestrator |
2026-03-19 00:05:44.631859 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-19 00:05:45.260449 | orchestrator | ok: [testbed-manager]
2026-03-19 00:05:45.260473 | orchestrator |
2026-03-19 00:05:45.260479 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-19 00:05:45.297638 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.297665 | orchestrator |
2026-03-19 00:05:45.297672 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-19 00:05:45.328515 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.328547 | orchestrator |
2026-03-19 00:05:45.328556 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-19 00:05:45.356521 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.356554 | orchestrator |
2026-03-19 00:05:45.356564 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-19 00:05:45.379174 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.379198 | orchestrator |
2026-03-19 00:05:45.379204 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-19 00:05:45.403365 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.403465 | orchestrator |
2026-03-19 00:05:45.403474 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-19 00:05:45.431279 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.431308 | orchestrator |
2026-03-19 00:05:45.431316 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-19 00:05:45.484458 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:05:45.484492 | orchestrator |
2026-03-19 00:05:45.484503 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-19 00:05:46.176628 | orchestrator | changed: [testbed-manager]
2026-03-19
00:05:46.176664 | orchestrator | 2026-03-19 00:05:46.176673 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-19 00:08:50.906178 | orchestrator | changed: [testbed-manager] 2026-03-19 00:08:50.906296 | orchestrator | 2026-03-19 00:08:50.906327 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-19 00:10:22.304382 | orchestrator | changed: [testbed-manager] 2026-03-19 00:10:22.304455 | orchestrator | 2026-03-19 00:10:22.304472 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-19 00:10:46.472668 | orchestrator | changed: [testbed-manager] 2026-03-19 00:10:46.472757 | orchestrator | 2026-03-19 00:10:46.472774 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-19 00:10:56.911599 | orchestrator | changed: [testbed-manager] 2026-03-19 00:10:56.911704 | orchestrator | 2026-03-19 00:10:56.911730 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-19 00:10:56.957843 | orchestrator | ok: [testbed-manager] 2026-03-19 00:10:56.957892 | orchestrator | 2026-03-19 00:10:56.957900 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-19 00:10:57.722164 | orchestrator | ok: [testbed-manager] 2026-03-19 00:10:57.722491 | orchestrator | 2026-03-19 00:10:57.722517 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-19 00:10:58.435342 | orchestrator | changed: [testbed-manager] 2026-03-19 00:10:58.435406 | orchestrator | 2026-03-19 00:10:58.435416 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-19 00:11:05.962856 | orchestrator | changed: [testbed-manager] 2026-03-19 00:11:05.963068 | orchestrator | 2026-03-19 00:11:05.963116 | orchestrator | TASK 
[Install ansible-core in venv] ******************************************** 2026-03-19 00:11:11.645704 | orchestrator | changed: [testbed-manager] 2026-03-19 00:11:11.645772 | orchestrator | 2026-03-19 00:11:11.645783 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-19 00:11:14.229183 | orchestrator | changed: [testbed-manager] 2026-03-19 00:11:14.229276 | orchestrator | 2026-03-19 00:11:14.229294 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-19 00:11:15.987915 | orchestrator | changed: [testbed-manager] 2026-03-19 00:11:15.988011 | orchestrator | 2026-03-19 00:11:15.988029 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-19 00:11:17.099971 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-19 00:11:17.100105 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-19 00:11:17.100114 | orchestrator | 2026-03-19 00:11:17.100120 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-19 00:11:17.148119 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-19 00:11:17.148232 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-19 00:11:17.148248 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-19 00:11:17.148261 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-19 00:11:20.428589 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-19 00:11:20.428640 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-19 00:11:20.428646 | orchestrator | 2026-03-19 00:11:20.428652 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-19 00:11:20.989533 | orchestrator | changed: [testbed-manager] 2026-03-19 00:11:20.989597 | orchestrator | 2026-03-19 00:11:20.989605 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-19 00:13:43.844108 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-19 00:13:43.844175 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-19 00:13:43.844185 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-19 00:13:43.844191 | orchestrator | 2026-03-19 00:13:43.844197 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-19 00:13:46.124504 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-19 00:13:46.124577 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-19 00:13:46.124585 | orchestrator | 2026-03-19 00:13:46.124593 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-19 00:13:46.124600 | orchestrator | 2026-03-19 00:13:46.124618 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 00:13:47.492992 | orchestrator | ok: [testbed-manager] 2026-03-19 00:13:47.493030 | orchestrator | 2026-03-19 00:13:47.493039 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-19 00:13:47.535210 | orchestrator | ok: [testbed-manager] 2026-03-19 00:13:47.535260 | 
orchestrator | 2026-03-19 00:13:47.535266 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-19 00:13:47.591043 | orchestrator | ok: [testbed-manager] 2026-03-19 00:13:47.591078 | orchestrator | 2026-03-19 00:13:47.591084 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-19 00:13:48.383225 | orchestrator | changed: [testbed-manager] 2026-03-19 00:13:48.383287 | orchestrator | 2026-03-19 00:13:48.383298 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-19 00:13:49.076628 | orchestrator | changed: [testbed-manager] 2026-03-19 00:13:49.076671 | orchestrator | 2026-03-19 00:13:49.076679 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-19 00:13:50.408283 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-19 00:13:50.408325 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-19 00:13:50.408337 | orchestrator | 2026-03-19 00:13:50.408359 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-19 00:13:51.761328 | orchestrator | changed: [testbed-manager] 2026-03-19 00:13:51.761453 | orchestrator | 2026-03-19 00:13:51.761479 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-19 00:13:53.496207 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-19 00:13:53.496247 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-19 00:13:53.496254 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-19 00:13:53.496259 | orchestrator | 2026-03-19 00:13:53.496266 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-19 00:13:53.559379 | orchestrator | skipping: 
[testbed-manager] 2026-03-19 00:13:53.559413 | orchestrator | 2026-03-19 00:13:53.559418 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-19 00:13:53.625997 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:13:53.626055 | orchestrator | 2026-03-19 00:13:53.626064 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-19 00:13:54.197429 | orchestrator | changed: [testbed-manager] 2026-03-19 00:13:54.197526 | orchestrator | 2026-03-19 00:13:54.197542 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-19 00:13:54.269773 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:13:54.269827 | orchestrator | 2026-03-19 00:13:54.269833 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-19 00:13:55.166257 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-19 00:13:55.166346 | orchestrator | changed: [testbed-manager] 2026-03-19 00:13:55.166359 | orchestrator | 2026-03-19 00:13:55.166371 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-19 00:13:55.207578 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:13:55.208719 | orchestrator | 2026-03-19 00:13:55.208755 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-19 00:13:55.244689 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:13:55.244755 | orchestrator | 2026-03-19 00:13:55.244766 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-19 00:13:55.284326 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:13:55.284518 | orchestrator | 2026-03-19 00:13:55.284540 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-19 00:13:55.381981 | 
orchestrator | skipping: [testbed-manager] 2026-03-19 00:13:55.382116 | orchestrator | 2026-03-19 00:13:55.382134 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-19 00:13:56.102262 | orchestrator | ok: [testbed-manager] 2026-03-19 00:13:56.102356 | orchestrator | 2026-03-19 00:13:56.102373 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-19 00:13:56.102387 | orchestrator | 2026-03-19 00:13:56.102398 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 00:13:57.469631 | orchestrator | ok: [testbed-manager] 2026-03-19 00:13:57.469716 | orchestrator | 2026-03-19 00:13:57.469733 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-19 00:13:58.431942 | orchestrator | changed: [testbed-manager] 2026-03-19 00:13:58.432027 | orchestrator | 2026-03-19 00:13:58.432240 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:13:58.432259 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-19 00:13:58.432271 | orchestrator | 2026-03-19 00:13:59.026276 | orchestrator | ok: Runtime: 0:08:18.498597 2026-03-19 00:13:59.043915 | 2026-03-19 00:13:59.044111 | TASK [Point out that logging in on the manager is now possible] 2026-03-19 00:13:59.082564 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-19 00:13:59.093755 | 2026-03-19 00:13:59.093868 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-19 00:13:59.124848 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-19 00:13:59.131942 | 2026-03-19 00:13:59.132062 | TASK [Run manager part 1 + 2] 2026-03-19 00:14:00.046119 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-19 00:14:00.110087 | orchestrator | 2026-03-19 00:14:00.110142 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-19 00:14:00.110149 | orchestrator | 2026-03-19 00:14:00.110162 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 00:14:02.935745 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:02.935799 | orchestrator | 2026-03-19 00:14:02.935824 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-19 00:14:02.968473 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:14:02.968520 | orchestrator | 2026-03-19 00:14:02.968529 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-19 00:14:02.998725 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:02.998785 | orchestrator | 2026-03-19 00:14:02.998797 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-19 00:14:03.028410 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:03.028455 | orchestrator | 2026-03-19 00:14:03.028462 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-19 00:14:03.101381 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:03.101428 | orchestrator | 2026-03-19 00:14:03.101436 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-19 00:14:03.169831 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:03.169901 | orchestrator | 2026-03-19 00:14:03.169909 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-19 00:14:03.216808 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-19 00:14:03.216851 | orchestrator | 2026-03-19 00:14:03.216857 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-19 00:14:03.908529 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:03.908578 | orchestrator | 2026-03-19 00:14:03.908586 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-19 00:14:03.960557 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:14:03.960598 | orchestrator | 2026-03-19 00:14:03.960604 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-19 00:14:05.277803 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:05.277903 | orchestrator | 2026-03-19 00:14:05.277914 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-19 00:14:05.816841 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:05.816944 | orchestrator | 2026-03-19 00:14:05.816961 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-19 00:14:06.896745 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:06.896801 | orchestrator | 2026-03-19 00:14:06.896809 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-19 00:14:21.946329 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:21.946453 | orchestrator | 2026-03-19 00:14:21.946470 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-19 00:14:22.641766 | orchestrator | ok: [testbed-manager] 2026-03-19 00:14:22.641823 | orchestrator | 2026-03-19 00:14:22.641833 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-19 00:14:22.699958 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:14:22.700019 | orchestrator | 2026-03-19 00:14:22.700029 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-19 00:14:23.671295 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:23.671594 | orchestrator | 2026-03-19 00:14:23.671625 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-19 00:14:24.619692 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:24.619735 | orchestrator | 2026-03-19 00:14:24.619745 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-19 00:14:25.179283 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:25.179398 | orchestrator | 2026-03-19 00:14:25.179416 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-19 00:14:25.227003 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-19 00:14:25.227075 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-19 00:14:25.227081 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-19 00:14:25.227086 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-19 00:14:27.314799 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:27.314897 | orchestrator | 2026-03-19 00:14:27.314908 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-19 00:14:36.046529 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-19 00:14:36.046672 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-19 00:14:36.046697 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-19 00:14:36.046709 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-19 00:14:36.046730 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-19 00:14:36.046767 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-19 00:14:36.046780 | orchestrator | 2026-03-19 00:14:36.046793 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-19 00:14:37.025093 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:37.025178 | orchestrator | 2026-03-19 00:14:37.025195 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-19 00:14:37.073723 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:14:37.073808 | orchestrator | 2026-03-19 00:14:37.073827 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-19 00:14:40.170751 | orchestrator | changed: [testbed-manager] 2026-03-19 00:14:40.170911 | orchestrator | 2026-03-19 00:14:40.170942 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-19 00:14:40.210889 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:14:40.210943 | orchestrator | 2026-03-19 00:14:40.210951 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-19 00:16:13.862750 | orchestrator | changed: [testbed-manager] 2026-03-19 
00:16:13.862855 | orchestrator | 2026-03-19 00:16:13.862876 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-19 00:16:14.964206 | orchestrator | ok: [testbed-manager] 2026-03-19 00:16:14.964245 | orchestrator | 2026-03-19 00:16:14.964251 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:16:14.964259 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-19 00:16:14.964264 | orchestrator | 2026-03-19 00:16:15.294720 | orchestrator | ok: Runtime: 0:02:15.600178 2026-03-19 00:16:15.310955 | 2026-03-19 00:16:15.311146 | TASK [Reboot manager] 2026-03-19 00:16:16.847529 | orchestrator | ok: Runtime: 0:00:00.950020 2026-03-19 00:16:16.859059 | 2026-03-19 00:16:16.859195 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-19 00:16:31.453712 | orchestrator | ok 2026-03-19 00:16:31.464882 | 2026-03-19 00:16:31.465007 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-19 00:17:31.507857 | orchestrator | ok 2026-03-19 00:17:31.515930 | 2026-03-19 00:17:31.516184 | TASK [Deploy manager + bootstrap nodes] 2026-03-19 00:17:33.984285 | orchestrator | + set -e 2026-03-19 00:17:33.984526 | orchestrator | 2026-03-19 00:17:33.984555 | orchestrator | # DEPLOY MANAGER 2026-03-19 00:17:33.984569 | orchestrator | 2026-03-19 00:17:33.984583 | orchestrator | + echo 2026-03-19 00:17:33.984597 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-19 00:17:33.984614 | orchestrator | + echo 2026-03-19 00:17:33.984684 | orchestrator | + cat /opt/manager-vars.sh 2026-03-19 00:17:33.988050 | orchestrator | export NUMBER_OF_NODES=6 2026-03-19 00:17:33.988101 | orchestrator | 2026-03-19 00:17:33.988113 | orchestrator | export CEPH_VERSION=reef 2026-03-19 00:17:33.988127 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-19 00:17:33.988139 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-19 00:17:33.988165 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-19 00:17:33.988176 | orchestrator | 2026-03-19 00:17:33.988194 | orchestrator | export ARA=false 2026-03-19 00:17:33.988205 | orchestrator | export DEPLOY_MODE=manager 2026-03-19 00:17:33.988223 | orchestrator | export TEMPEST=true 2026-03-19 00:17:33.988234 | orchestrator | export IS_ZUUL=true 2026-03-19 00:17:33.988244 | orchestrator | 2026-03-19 00:17:33.988262 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218 2026-03-19 00:17:33.988273 | orchestrator | export EXTERNAL_API=false 2026-03-19 00:17:33.988284 | orchestrator | 2026-03-19 00:17:33.988295 | orchestrator | export IMAGE_USER=ubuntu 2026-03-19 00:17:33.988309 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-19 00:17:33.988320 | orchestrator | 2026-03-19 00:17:33.988331 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-19 00:17:33.988348 | orchestrator | 2026-03-19 00:17:33.988359 | orchestrator | + echo 2026-03-19 00:17:33.988371 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 00:17:33.989128 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 00:17:33.989146 | orchestrator | ++ INTERACTIVE=false 2026-03-19 00:17:33.989159 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 00:17:33.989176 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 00:17:33.989363 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 00:17:33.989380 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 00:17:33.989393 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 00:17:33.989405 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 00:17:33.989415 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 00:17:33.989430 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 00:17:33.989442 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 00:17:33.989453 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-19 00:17:33.989463 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-19 00:17:33.989474 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 00:17:33.989496 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 00:17:33.989507 | orchestrator | ++ export ARA=false 2026-03-19 00:17:33.989518 | orchestrator | ++ ARA=false 2026-03-19 00:17:33.989529 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 00:17:33.989539 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 00:17:33.989550 | orchestrator | ++ export TEMPEST=true 2026-03-19 00:17:33.989560 | orchestrator | ++ TEMPEST=true 2026-03-19 00:17:33.989571 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 00:17:33.989585 | orchestrator | ++ IS_ZUUL=true 2026-03-19 00:17:33.989596 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218 2026-03-19 00:17:33.989607 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218 2026-03-19 00:17:33.989618 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 00:17:33.989628 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 00:17:33.989639 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 00:17:33.989649 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 00:17:33.989660 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 00:17:33.989671 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 00:17:33.989682 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 00:17:33.989693 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 00:17:33.989704 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-19 00:17:34.043605 | orchestrator | + docker version 2026-03-19 00:17:34.149490 | orchestrator | Client: Docker Engine - Community 2026-03-19 00:17:34.149613 | orchestrator | Version: 27.5.1 2026-03-19 00:17:34.149629 | orchestrator | API version: 1.47 2026-03-19 00:17:34.149644 | orchestrator | Go version: go1.22.11 2026-03-19 00:17:34.149655 | orchestrator | Git commit: 9f9e405 2026-03-19 00:17:34.149667 
| orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-19 00:17:34.149679 | orchestrator | OS/Arch: linux/amd64
2026-03-19 00:17:34.149691 | orchestrator | Context: default
2026-03-19 00:17:34.149702 | orchestrator |
2026-03-19 00:17:34.149713 | orchestrator | Server: Docker Engine - Community
2026-03-19 00:17:34.149724 | orchestrator | Engine:
2026-03-19 00:17:34.149736 | orchestrator | Version: 27.5.1
2026-03-19 00:17:34.149747 | orchestrator | API version: 1.47 (minimum version 1.24)
2026-03-19 00:17:34.149794 | orchestrator | Go version: go1.22.11
2026-03-19 00:17:34.149806 | orchestrator | Git commit: 4c9b3b0
2026-03-19 00:17:34.149817 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-19 00:17:34.149827 | orchestrator | OS/Arch: linux/amd64
2026-03-19 00:17:34.149838 | orchestrator | Experimental: false
2026-03-19 00:17:34.149849 | orchestrator | containerd:
2026-03-19 00:17:34.149860 | orchestrator | Version: v2.2.2
2026-03-19 00:17:34.149871 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9
2026-03-19 00:17:34.149882 | orchestrator | runc:
2026-03-19 00:17:34.149893 | orchestrator | Version: 1.3.4
2026-03-19 00:17:34.149963 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8
2026-03-19 00:17:34.149976 | orchestrator | docker-init:
2026-03-19 00:17:34.149987 | orchestrator | Version: 0.19.0
2026-03-19 00:17:34.149999 | orchestrator | GitCommit: de40ad0
2026-03-19 00:17:34.153062 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2026-03-19 00:17:34.166534 | orchestrator | + set -e
2026-03-19 00:17:34.166640 | orchestrator | + source /opt/manager-vars.sh
2026-03-19 00:17:34.166656 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-19 00:17:34.166670 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-19 00:17:34.166681 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-19 00:17:34.166692 | orchestrator | ++ CEPH_VERSION=reef
2026-03-19 00:17:34.166703 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-19 00:17:34.166715 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-19 00:17:34.166726 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-19 00:17:34.166737 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-19 00:17:34.166748 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-19 00:17:34.166759 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-19 00:17:34.166769 | orchestrator | ++ export ARA=false
2026-03-19 00:17:34.166780 | orchestrator | ++ ARA=false
2026-03-19 00:17:34.166791 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-19 00:17:34.166802 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-19 00:17:34.166812 | orchestrator | ++ export TEMPEST=true
2026-03-19 00:17:34.166823 | orchestrator | ++ TEMPEST=true
2026-03-19 00:17:34.166833 | orchestrator | ++ export IS_ZUUL=true
2026-03-19 00:17:34.166844 | orchestrator | ++ IS_ZUUL=true
2026-03-19 00:17:34.166855 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218
2026-03-19 00:17:34.166866 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218
2026-03-19 00:17:34.166876 | orchestrator | ++ export EXTERNAL_API=false
2026-03-19 00:17:34.166887 | orchestrator | ++ EXTERNAL_API=false
2026-03-19 00:17:34.166930 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-19 00:17:34.166947 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-19 00:17:34.166958 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-19 00:17:34.166968 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-19 00:17:34.166979 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-19 00:17:34.166989 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-19 00:17:34.167000 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-19 00:17:34.167011 | orchestrator | ++ export INTERACTIVE=false
2026-03-19 00:17:34.167021 | orchestrator | ++ INTERACTIVE=false
2026-03-19 00:17:34.167032 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-19 00:17:34.167047 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-19 00:17:34.167058 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-19 00:17:34.167068 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-19 00:17:34.167079 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-19 00:17:34.173786 | orchestrator | + set -e
2026-03-19 00:17:34.173852 | orchestrator | + VERSION=reef
2026-03-19 00:17:34.174800 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-19 00:17:34.183944 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-19 00:17:34.184005 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-19 00:17:34.188313 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-19 00:17:34.195399 | orchestrator | + set -e
2026-03-19 00:17:34.195437 | orchestrator | + VERSION=2024.2
2026-03-19 00:17:34.196401 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-19 00:17:34.200279 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-19 00:17:34.200335 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-19 00:17:34.204593 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-19 00:17:34.205338 | orchestrator | ++ semver latest 7.0.0
2026-03-19 00:17:34.262614 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-19 00:17:34.262707 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-19 00:17:34.262721 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-19 00:17:34.263357 | orchestrator | ++ semver latest 10.0.0-0
2026-03-19 00:17:34.317091 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-19 00:17:34.317304 | orchestrator | ++ semver 2024.2 2025.1
2026-03-19 00:17:34.370153 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-19 00:17:34.370289 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-19 00:17:34.464627 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-19 00:17:34.468116 | orchestrator | + source /opt/venv/bin/activate
2026-03-19 00:17:34.469557 | orchestrator | ++ deactivate nondestructive
2026-03-19 00:17:34.469585 | orchestrator | ++ '[' -n '' ']'
2026-03-19 00:17:34.469597 | orchestrator | ++ '[' -n '' ']'
2026-03-19 00:17:34.469608 | orchestrator | ++ hash -r
2026-03-19 00:17:34.469619 | orchestrator | ++ '[' -n '' ']'
2026-03-19 00:17:34.469630 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-19 00:17:34.469641 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-19 00:17:34.469654 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-19 00:17:34.469666 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-19 00:17:34.469676 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-19 00:17:34.469687 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-19 00:17:34.469698 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-19 00:17:34.469710 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-19 00:17:34.469722 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-19 00:17:34.469733 | orchestrator | ++ export PATH
2026-03-19 00:17:34.469743 | orchestrator | ++ '[' -n '' ']'
2026-03-19 00:17:34.469754 | orchestrator | ++ '[' -z '' ']'
2026-03-19 00:17:34.469795 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-19 00:17:34.469806 | orchestrator | ++ PS1='(venv) '
2026-03-19 00:17:34.469817 | orchestrator | ++ export PS1
2026-03-19 00:17:34.469828 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-19 00:17:34.469840 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-19 00:17:34.469851 | orchestrator | ++ hash -r
2026-03-19 00:17:34.470761 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-19 00:17:35.742131 | orchestrator |
2026-03-19 00:17:35.742247 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-19 00:17:35.742265 | orchestrator |
2026-03-19 00:17:35.742277 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-19 00:17:36.320765 | orchestrator | ok: [testbed-manager]
2026-03-19 00:17:36.320929 | orchestrator |
2026-03-19 00:17:36.320953 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-19 00:17:37.278359 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:37.278460 | orchestrator |
2026-03-19 00:17:37.278476 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-19 00:17:37.278489 | orchestrator |
2026-03-19 00:17:37.278500 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 00:17:39.845344 | orchestrator | ok: [testbed-manager]
2026-03-19 00:17:39.887977 | orchestrator |
2026-03-19 00:17:39.888071 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-19 00:17:39.897558 | orchestrator | ok: [testbed-manager]
2026-03-19 00:17:39.897629 | orchestrator |
2026-03-19 00:17:39.897645 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-19 00:17:40.349682 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:40.349811 | orchestrator |
2026-03-19 00:17:40.349837 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-19 00:17:40.386823 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:17:40.387005 | orchestrator |
2026-03-19 00:17:40.387033 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-19 00:17:40.738197 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:40.738312 | orchestrator |
2026-03-19 00:17:40.738331 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-19 00:17:41.099726 | orchestrator | ok: [testbed-manager]
2026-03-19 00:17:41.099833 | orchestrator |
2026-03-19 00:17:41.099849 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-19 00:17:41.221751 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:17:41.221844 | orchestrator |
2026-03-19 00:17:41.221856 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-19 00:17:41.221866 | orchestrator |
2026-03-19 00:17:41.221874 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 00:17:43.059284 | orchestrator | ok: [testbed-manager]
2026-03-19 00:17:43.059405 | orchestrator |
2026-03-19 00:17:43.059421 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-19 00:17:43.159520 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-19 00:17:43.159625 | orchestrator |
2026-03-19 00:17:43.159641 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-19 00:17:43.221475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-19 00:17:43.221530 | orchestrator |
2026-03-19 00:17:43.221543 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-19 00:17:44.370636 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-19 00:17:44.370744 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
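
The `osism.services.traefik : Create required directories` loop above is the usual Ansible `file: state=directory` pattern over a list of paths. A plain-shell sketch of the same step (the three paths come from the loop items in the log; the function and its `root` parameter are our own additions so the sketch can be exercised outside the testbed — the role creates the directories under `/` directly):

```shell
#!/bin/sh
# Plain-shell sketch of the traefik "Create required directories" task.
# The directory list is taken from the loop items in the log above; the
# root parameter is a hypothetical addition for testing outside the testbed.
set -e

create_traefik_dirs() {
    root="$1"
    for dir in /opt/traefik /opt/traefik/certificates /opt/traefik/configuration; do
        mkdir -p "${root}${dir}"
    done
}
```

Like the Ansible `file` module, `mkdir -p` is idempotent: a rerun succeeds quietly instead of failing on existing directories, which is why the task can report `ok` rather than `changed` on subsequent runs.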
2026-03-19 00:17:44.370759 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-19 00:17:44.370771 | orchestrator |
2026-03-19 00:17:44.370783 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-19 00:17:46.236157 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-19 00:17:46.236271 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-19 00:17:46.236289 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-19 00:17:46.236302 | orchestrator |
2026-03-19 00:17:46.236314 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-19 00:17:46.927986 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-19 00:17:46.928087 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:46.928110 | orchestrator |
2026-03-19 00:17:46.928127 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-19 00:17:47.588427 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-19 00:17:47.588538 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:47.588554 | orchestrator |
2026-03-19 00:17:47.588567 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-19 00:17:47.649669 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:17:47.649769 | orchestrator |
2026-03-19 00:17:47.649783 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-19 00:17:48.022457 | orchestrator | ok: [testbed-manager]
2026-03-19 00:17:48.022569 | orchestrator |
2026-03-19 00:17:48.022588 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-19 00:17:48.112636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-19 00:17:48.112736 | orchestrator |
2026-03-19 00:17:48.112751 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-19 00:17:49.309751 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:49.309861 | orchestrator |
2026-03-19 00:17:49.309877 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-19 00:17:50.198351 | orchestrator | changed: [testbed-manager]
2026-03-19 00:17:50.198460 | orchestrator |
2026-03-19 00:17:50.198483 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-19 00:18:00.009499 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:00.009626 | orchestrator |
2026-03-19 00:18:00.009667 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-19 00:18:00.078267 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:18:00.078364 | orchestrator |
2026-03-19 00:18:00.078379 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-19 00:18:00.078392 | orchestrator |
2026-03-19 00:18:00.078404 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 00:18:02.079749 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:02.079879 | orchestrator |
2026-03-19 00:18:02.079990 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-19 00:18:02.183334 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-19 00:18:02.183436 | orchestrator |
2026-03-19 00:18:02.183479 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-19 00:18:02.242416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-19 00:18:02.242514 | orchestrator |
2026-03-19 00:18:02.242529 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-19 00:18:04.663397 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:04.663516 | orchestrator |
2026-03-19 00:18:04.663542 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-19 00:18:04.719724 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:04.719827 | orchestrator |
2026-03-19 00:18:04.719843 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-19 00:18:04.848845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-19 00:18:04.849002 | orchestrator |
2026-03-19 00:18:04.849022 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-19 00:18:07.658837 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-19 00:18:07.659002 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-19 00:18:07.659019 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-19 00:18:07.659031 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-19 00:18:07.659042 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-19 00:18:07.659053 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-19 00:18:07.659064 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-19 00:18:07.659075 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-19 00:18:07.659086 | orchestrator |
2026-03-19 00:18:07.659098 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-19 00:18:08.280406 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:08.280533 | orchestrator |
2026-03-19 00:18:08.280561 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-19 00:18:08.903154 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:08.903261 | orchestrator |
2026-03-19 00:18:08.903277 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-19 00:18:08.988315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-19 00:18:08.988418 | orchestrator |
2026-03-19 00:18:08.988434 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-19 00:18:10.165877 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-19 00:18:10.165996 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-19 00:18:10.166009 | orchestrator |
2026-03-19 00:18:10.166072 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-19 00:18:10.805402 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:10.805511 | orchestrator |
2026-03-19 00:18:10.805531 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-19 00:18:10.849820 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:18:10.849947 | orchestrator |
2026-03-19 00:18:10.849962 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-19 00:18:10.931674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-19 00:18:10.931780 | orchestrator |
2026-03-19 00:18:10.931802 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-19 00:18:11.554636 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:11.554767 | orchestrator |
2026-03-19 00:18:11.554786 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-19 00:18:11.614802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-19 00:18:11.614997 | orchestrator |
2026-03-19 00:18:11.615018 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-19 00:18:13.009739 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-19 00:18:13.009855 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-19 00:18:13.009870 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:13.009883 | orchestrator |
2026-03-19 00:18:13.009895 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-19 00:18:13.629533 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:13.629642 | orchestrator |
2026-03-19 00:18:13.629659 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-19 00:18:13.691315 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:18:13.691414 | orchestrator |
2026-03-19 00:18:13.691429 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-19 00:18:13.780074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-19 00:18:13.780196 | orchestrator |
2026-03-19 00:18:13.780227 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-19 00:18:14.296454 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:14.296554 | orchestrator |
2026-03-19 00:18:14.296593 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-19 00:18:14.662264 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:14.662361 | orchestrator |
2026-03-19 00:18:14.662375 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-19 00:18:15.843410 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-19 00:18:15.843557 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-19 00:18:15.843575 | orchestrator |
2026-03-19 00:18:15.843588 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-19 00:18:16.491137 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:16.491245 | orchestrator |
2026-03-19 00:18:16.491261 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-19 00:18:16.861255 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:16.861369 | orchestrator |
2026-03-19 00:18:16.861387 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-19 00:18:17.233052 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:17.233157 | orchestrator |
2026-03-19 00:18:17.233173 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-19 00:18:17.286583 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:18:17.286668 | orchestrator |
2026-03-19 00:18:17.286678 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-19 00:18:17.352352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-19 00:18:17.352436 | orchestrator |
2026-03-19 00:18:17.352445 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-19 00:18:17.385327 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:17.385438 | orchestrator |
2026-03-19 00:18:17.385453 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-19 00:18:19.420145 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-19 00:18:19.420266 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-19 00:18:19.420285 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-19 00:18:19.420297 | orchestrator |
2026-03-19 00:18:19.420310 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-19 00:18:20.144102 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:20.144213 | orchestrator |
2026-03-19 00:18:20.144229 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-19 00:18:20.842303 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:20.843244 | orchestrator |
2026-03-19 00:18:20.843287 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-19 00:18:21.559443 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:21.559554 | orchestrator |
2026-03-19 00:18:21.559574 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-19 00:18:21.637187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-19 00:18:21.637303 | orchestrator |
2026-03-19 00:18:21.637328 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-19 00:18:21.677549 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:21.677640 | orchestrator |
2026-03-19 00:18:21.677652 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-19 00:18:22.360418 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-19 00:18:22.360545 | orchestrator |
2026-03-19 00:18:22.360562 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-19 00:18:22.440414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-19 00:18:22.440517 | orchestrator |
2026-03-19 00:18:22.440531 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-19 00:18:23.136737 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:23.136832 | orchestrator |
2026-03-19 00:18:23.136847 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-19 00:18:23.798838 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:23.798962 | orchestrator |
2026-03-19 00:18:23.798974 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-19 00:18:23.863880 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:18:23.863985 | orchestrator |
2026-03-19 00:18:23.864001 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-19 00:18:23.925269 | orchestrator | ok: [testbed-manager]
2026-03-19 00:18:23.925347 | orchestrator |
2026-03-19 00:18:23.925359 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-19 00:18:24.825391 | orchestrator | changed: [testbed-manager]
2026-03-19 00:18:24.825509 | orchestrator |
2026-03-19 00:18:24.825525 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-19 00:19:32.860374 | orchestrator | changed: [testbed-manager]
2026-03-19 00:19:32.860505 | orchestrator |
2026-03-19 00:19:32.860521 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-19 00:19:33.765157 | orchestrator | ok: [testbed-manager]
2026-03-19 00:19:33.765267 | orchestrator |
2026-03-19 00:19:33.765284 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-19 00:19:33.819629 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:19:33.819722 | orchestrator |
2026-03-19 00:19:33.819736 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-19 00:19:35.891460 | orchestrator | changed: [testbed-manager]
2026-03-19 00:19:35.891553 | orchestrator |
2026-03-19 00:19:35.891566 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-19 00:19:35.962474 | orchestrator | ok: [testbed-manager]
2026-03-19 00:19:35.962581 | orchestrator |
2026-03-19 00:19:35.962622 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-19 00:19:35.962636 | orchestrator |
2026-03-19 00:19:35.962648 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-19 00:19:36.008875 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:19:36.009035 | orchestrator |
2026-03-19 00:19:36.009051 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-19 00:20:36.059492 | orchestrator | Pausing for 60 seconds
2026-03-19 00:20:36.059638 | orchestrator | changed: [testbed-manager]
2026-03-19 00:20:36.059664 | orchestrator |
2026-03-19 00:20:36.059686 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-19 00:20:38.584409 | orchestrator | changed: [testbed-manager]
2026-03-19 00:20:38.584519 | orchestrator |
2026-03-19 00:20:38.584536 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-19 00:21:20.050625 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-19 00:21:20.050753 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-19 00:21:20.050771 | orchestrator | changed: [testbed-manager]
2026-03-19 00:21:20.050817 | orchestrator |
2026-03-19 00:21:20.050830 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-19 00:21:25.541985 | orchestrator | changed: [testbed-manager]
2026-03-19 00:21:25.542155 | orchestrator |
2026-03-19 00:21:25.542173 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-19 00:21:25.628178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-19 00:21:25.628289 | orchestrator |
2026-03-19 00:21:25.628305 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-19 00:21:25.628318 | orchestrator |
2026-03-19 00:21:25.628329 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-19 00:21:25.686117 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:21:25.686212 | orchestrator |
2026-03-19 00:21:25.686225 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-19 00:21:25.751326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-19 00:21:25.751433 | orchestrator |
2026-03-19 00:21:25.751449 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-19 00:21:26.502205 | orchestrator | changed: [testbed-manager]
2026-03-19 00:21:26.502314 | 
orchestrator |
2026-03-19 00:21:26.502330 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-19 00:21:29.571329 | orchestrator | ok: [testbed-manager]
2026-03-19 00:21:29.571420 | orchestrator |
2026-03-19 00:21:29.571430 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-19 00:21:29.645592 | orchestrator | ok: [testbed-manager] => {
2026-03-19 00:21:29.645696 | orchestrator | "version_check_result.stdout_lines": [
2026-03-19 00:21:29.645711 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-19 00:21:29.645723 | orchestrator | "Checking running containers against expected versions...",
2026-03-19 00:21:29.645736 | orchestrator | "",
2026-03-19 00:21:29.645749 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-19 00:21:29.645760 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-19 00:21:29.645772 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.645783 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-19 00:21:29.645794 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.645805 | orchestrator | "",
2026-03-19 00:21:29.645815 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-19 00:21:29.645826 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-03-19 00:21:29.645837 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.645848 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-03-19 00:21:29.645859 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.645870 | orchestrator | "",
2026-03-19 00:21:29.645880 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-19 00:21:29.645891 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-19 00:21:29.645902 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.645991 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-19 00:21:29.646005 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646072 | orchestrator | "",
2026-03-19 00:21:29.646087 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-19 00:21:29.646098 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-19 00:21:29.646109 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646120 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-19 00:21:29.646131 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646142 | orchestrator | "",
2026-03-19 00:21:29.646153 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-19 00:21:29.646164 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-19 00:21:29.646206 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646218 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-19 00:21:29.646229 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646239 | orchestrator | "",
2026-03-19 00:21:29.646250 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-19 00:21:29.646261 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646272 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646283 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646294 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646304 | orchestrator | "",
2026-03-19 00:21:29.646315 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-19 00:21:29.646326 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-19 00:21:29.646337 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646347 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-19 00:21:29.646358 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646369 | orchestrator | "",
2026-03-19 00:21:29.646379 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-19 00:21:29.646390 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-19 00:21:29.646401 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646412 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-19 00:21:29.646422 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646433 | orchestrator | "",
2026-03-19 00:21:29.646454 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-19 00:21:29.646465 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-19 00:21:29.646482 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646493 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-19 00:21:29.646505 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646515 | orchestrator | "",
2026-03-19 00:21:29.646526 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-19 00:21:29.646537 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-19 00:21:29.646548 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646559 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-19 00:21:29.646570 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646581 | orchestrator | "",
2026-03-19 00:21:29.646592 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-19 00:21:29.646602 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646613 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646624 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646635 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646646 | orchestrator | "",
2026-03-19 00:21:29.646656 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-19 00:21:29.646667 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646678 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646689 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646699 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646710 | orchestrator | "",
2026-03-19 00:21:29.646721 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-19 00:21:29.646732 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646742 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646753 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646764 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646774 | orchestrator | "",
2026-03-19 00:21:29.646785 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-19 00:21:29.646796 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646807 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646818 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646836 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646847 | orchestrator | "",
2026-03-19 00:21:29.646858 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-19 00:21:29.646888 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646900 | orchestrator | " Enabled: true",
2026-03-19 00:21:29.646911 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-19 00:21:29.646945 | orchestrator | " Status: ✅ MATCH",
2026-03-19 00:21:29.646956 | orchestrator | "",
2026-03-19 00:21:29.646967 | orchestrator | "=== Summary ===",
2026-03-19 00:21:29.646977 | orchestrator | "Errors (version mismatches): 0",
2026-03-19 00:21:29.646988 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-19 00:21:29.646999 | orchestrator | "",
2026-03-19 00:21:29.647010 | orchestrator | "✅ All running containers match expected versions!"
2026-03-19 00:21:29.647021 | orchestrator | ]
2026-03-19 00:21:29.647032 | orchestrator | }
2026-03-19 00:21:29.647043 | orchestrator |
2026-03-19 00:21:29.647054 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-19 00:21:29.706131 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:21:29.706244 | orchestrator |
2026-03-19 00:21:29.706259 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:21:29.706272 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-19 00:21:29.706284 | orchestrator |
2026-03-19 00:21:29.797392 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-19 00:21:29.797509 | orchestrator | + deactivate
2026-03-19 00:21:29.797535 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-19 00:21:29.797556 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-19 00:21:29.797572 | orchestrator | + export PATH
2026-03-19 00:21:29.797586 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-19 00:21:29.797603 | orchestrator | + '[' -n '' ']'
2026-03-19 00:21:29.797618 | orchestrator | + hash -r
2026-03-19 00:21:29.797647 | orchestrator | + '[' -n '' ']'
2026-03-19 00:21:29.797657 | orchestrator | + unset VIRTUAL_ENV
2026-03-19 00:21:29.797666 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-19 00:21:29.797675 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-19 00:21:29.797683 | orchestrator | + unset -f deactivate 2026-03-19 00:21:29.797693 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-19 00:21:29.807020 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 00:21:29.807098 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-19 00:21:29.807108 | orchestrator | + local max_attempts=60 2026-03-19 00:21:29.807115 | orchestrator | + local name=ceph-ansible 2026-03-19 00:21:29.807122 | orchestrator | + local attempt_num=1 2026-03-19 00:21:29.808930 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:21:29.844160 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:21:29.844243 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-19 00:21:29.844257 | orchestrator | + local max_attempts=60 2026-03-19 00:21:29.844358 | orchestrator | + local name=kolla-ansible 2026-03-19 00:21:29.844370 | orchestrator | + local attempt_num=1 2026-03-19 00:21:29.844391 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-19 00:21:29.873799 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:21:29.873872 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-19 00:21:29.873884 | orchestrator | + local max_attempts=60 2026-03-19 00:21:29.873896 | orchestrator | + local name=osism-ansible 2026-03-19 00:21:29.873907 | orchestrator | + local attempt_num=1 2026-03-19 00:21:29.874232 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-19 00:21:29.907452 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:21:29.907541 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-19 00:21:29.907556 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-19 00:21:30.568589 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-19 00:21:30.782227 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-19 00:21:30.782371 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782389 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782401 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-19 00:21:30.782414 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-19 00:21:30.782424 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782435 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782446 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-03-19 00:21:30.782475 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782486 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-19 00:21:30.782497 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-19 00:21:30.782508 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-19 00:21:30.782518 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782529 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-19 00:21:30.782540 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.782551 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-19 00:21:30.784950 | orchestrator | ++ semver latest 7.0.0 2026-03-19 00:21:30.832105 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-19 00:21:30.832191 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-19 00:21:30.832206 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-19 00:21:30.836058 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-19 00:21:43.318468 | orchestrator | 2026-03-19 00:21:43 | INFO  | Prepare task for execution of resolvconf. 2026-03-19 00:21:43.510503 | orchestrator | 2026-03-19 00:21:43 | INFO  | Task fbcd2db2-17df-4b32-ae14-9ae69737456e (resolvconf) was prepared for execution. 2026-03-19 00:21:43.510629 | orchestrator | 2026-03-19 00:21:43 | INFO  | It takes a moment until task fbcd2db2-17df-4b32-ae14-9ae69737456e (resolvconf) has been started and output is visible here. 
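The `wait_for_container_healthy` calls traced above poll `docker inspect -f '{{.State.Health.Status}}'` until a container reports `healthy`. A minimal sketch of such a helper follows, with the status command made injectable so the sketch runs without Docker; the poll interval and the stub function are assumptions, not taken from the testbed scripts:

```shell
# wait_for_container_healthy MAX_ATTEMPTS NAME CHECK_CMD...
# Polls "CHECK_CMD NAME" (in the log above: docker inspect -f
# '{{.State.Health.Status}}') until it prints "healthy" or the
# attempt budget is exhausted.
wait_for_container_healthy() {
    max_attempts="$1"; name="$2"; shift 2
    attempt_num=1
    while [ "$("$@" "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1   # assumed poll interval; the real delay is not shown in the log
    done
    echo "$name is healthy"
}

# Stub check so the sketch runs without Docker; always reports healthy.
fake_inspect() { echo healthy; }
wait_for_container_healthy 60 ceph-ansible fake_inspect
```

With the stub in place the call returns immediately, mirroring the single-poll success seen for ceph-ansible, kolla-ansible, and osism-ansible in the trace.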
2026-03-19 00:21:56.257285 | orchestrator |
2026-03-19 00:21:56.257417 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-19 00:21:56.257434 | orchestrator |
2026-03-19 00:21:56.257447 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 00:21:56.257459 | orchestrator | Thursday 19 March 2026 00:21:46 +0000 (0:00:00.183) 0:00:00.183 ********
2026-03-19 00:21:56.257470 | orchestrator | ok: [testbed-manager]
2026-03-19 00:21:56.257481 | orchestrator |
2026-03-19 00:21:56.257493 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-19 00:21:56.257505 | orchestrator | Thursday 19 March 2026 00:21:50 +0000 (0:00:03.752) 0:00:03.935 ********
2026-03-19 00:21:56.257516 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:21:56.257527 | orchestrator |
2026-03-19 00:21:56.257538 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-19 00:21:56.257548 | orchestrator | Thursday 19 March 2026 00:21:50 +0000 (0:00:00.063) 0:00:03.998 ********
2026-03-19 00:21:56.257559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-19 00:21:56.257571 | orchestrator |
2026-03-19 00:21:56.257582 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-19 00:21:56.257593 | orchestrator | Thursday 19 March 2026 00:21:50 +0000 (0:00:00.074) 0:00:04.073 ********
2026-03-19 00:21:56.257616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-19 00:21:56.257627 | orchestrator |
2026-03-19 00:21:56.257638 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-19 00:21:56.257649 | orchestrator | Thursday 19 March 2026 00:21:50 +0000 (0:00:00.074) 0:00:04.147 ********
2026-03-19 00:21:56.257660 | orchestrator | ok: [testbed-manager]
2026-03-19 00:21:56.257670 | orchestrator |
2026-03-19 00:21:56.257681 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-19 00:21:56.257692 | orchestrator | Thursday 19 March 2026 00:21:51 +0000 (0:00:01.125) 0:00:05.273 ********
2026-03-19 00:21:56.257703 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:21:56.257714 | orchestrator |
2026-03-19 00:21:56.257724 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-19 00:21:56.257735 | orchestrator | Thursday 19 March 2026 00:21:51 +0000 (0:00:00.043) 0:00:05.317 ********
2026-03-19 00:21:56.257746 | orchestrator | ok: [testbed-manager]
2026-03-19 00:21:56.257756 | orchestrator |
2026-03-19 00:21:56.257767 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-19 00:21:56.257777 | orchestrator | Thursday 19 March 2026 00:21:52 +0000 (0:00:00.544) 0:00:05.861 ********
2026-03-19 00:21:56.257790 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:21:56.257803 | orchestrator |
2026-03-19 00:21:56.257816 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-19 00:21:56.257829 | orchestrator | Thursday 19 March 2026 00:21:52 +0000 (0:00:00.077) 0:00:05.938 ********
2026-03-19 00:21:56.257842 | orchestrator | changed: [testbed-manager]
2026-03-19 00:21:56.257854 | orchestrator |
2026-03-19 00:21:56.257866 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-19 00:21:56.257879 | orchestrator | Thursday 19 March 2026 00:21:52 +0000 (0:00:00.600) 0:00:06.539 ********
2026-03-19 00:21:56.257891 | orchestrator | changed: [testbed-manager]
2026-03-19 00:21:56.257904 | orchestrator |
2026-03-19 00:21:56.257961 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-19 00:21:56.257986 | orchestrator | Thursday 19 March 2026 00:21:53 +0000 (0:00:01.077) 0:00:07.616 ********
2026-03-19 00:21:56.258012 | orchestrator | ok: [testbed-manager]
2026-03-19 00:21:56.258115 | orchestrator |
2026-03-19 00:21:56.258171 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-19 00:21:56.258191 | orchestrator | Thursday 19 March 2026 00:21:54 +0000 (0:00:01.001) 0:00:08.617 ********
2026-03-19 00:21:56.258210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-19 00:21:56.258228 | orchestrator |
2026-03-19 00:21:56.258248 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-19 00:21:56.258267 | orchestrator | Thursday 19 March 2026 00:21:54 +0000 (0:00:00.080) 0:00:08.698 ********
2026-03-19 00:21:56.258285 | orchestrator | changed: [testbed-manager]
2026-03-19 00:21:56.258303 | orchestrator |
2026-03-19 00:21:56.258321 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:21:56.258343 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-19 00:21:56.258361 | orchestrator |
2026-03-19 00:21:56.258381 | orchestrator |
2026-03-19 00:21:56.258400 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:21:56.258418 | orchestrator | Thursday 19 March 2026 00:21:56 +0000 (0:00:01.169) 0:00:09.867 ********
2026-03-19 00:21:56.258437 | orchestrator | ===============================================================================
2026-03-19 00:21:56.258454 | orchestrator | Gathering Facts --------------------------------------------------------- 3.75s
2026-03-19 00:21:56.258473 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s
2026-03-19 00:21:56.258491 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s
2026-03-19 00:21:56.258510 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2026-03-19 00:21:56.258529 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s
2026-03-19 00:21:56.258547 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s
2026-03-19 00:21:56.258618 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s
2026-03-19 00:21:56.258638 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-03-19 00:21:56.258657 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-19 00:21:56.258676 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2026-03-19 00:21:56.258696 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-19 00:21:56.258708 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-03-19 00:21:56.258719 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s
2026-03-19 00:21:56.451497 | orchestrator | + osism apply sshconfig
2026-03-19 00:22:07.718588 | orchestrator | 2026-03-19 00:22:07 | INFO  | Prepare task for execution of sshconfig.
2026-03-19 00:22:07.802980 | orchestrator | 2026-03-19 00:22:07 | INFO  | Task 693492aa-5b4f-4e0d-9c47-6fa302ee067e (sshconfig) was prepared for execution.
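The resolvconf play above replaces `/etc/resolv.conf` with a symlink to `/run/systemd/resolve/stub-resolv.conf`. A quick way to verify that end state on a host might look like the following; the helper name and the throwaway `/tmp` symlink used for the demo are illustrative, not part of the role:

```shell
# Succeeds when PATH is a symlink pointing at the systemd-resolved stub,
# which is the state the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" task leaves behind.
is_stub_resolv() {
    [ "$(readlink "$1" 2>/dev/null)" = /run/systemd/resolve/stub-resolv.conf ]
}

# Demo against a throwaway symlink instead of the real /etc/resolv.conf.
ln -sf /run/systemd/resolve/stub-resolv.conf /tmp/demo-resolv.conf
if is_stub_resolv /tmp/demo-resolv.conf; then
    echo "resolv.conf is managed by systemd-resolved"
fi
rm -f /tmp/demo-resolv.conf
```

On the testbed manager itself the check would be run against `/etc/resolv.conf` directly.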
2026-03-19 00:22:07.803079 | orchestrator | 2026-03-19 00:22:07 | INFO  | It takes a moment until task 693492aa-5b4f-4e0d-9c47-6fa302ee067e (sshconfig) has been started and output is visible here.
2026-03-19 00:22:18.633460 | orchestrator |
2026-03-19 00:22:18.633577 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-19 00:22:18.633591 | orchestrator |
2026-03-19 00:22:18.633601 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-19 00:22:18.633610 | orchestrator | Thursday 19 March 2026 00:22:10 +0000 (0:00:00.185) 0:00:00.185 ********
2026-03-19 00:22:18.633619 | orchestrator | ok: [testbed-manager]
2026-03-19 00:22:18.633638 | orchestrator |
2026-03-19 00:22:18.633648 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-19 00:22:18.633657 | orchestrator | Thursday 19 March 2026 00:22:11 +0000 (0:00:00.896) 0:00:01.081 ********
2026-03-19 00:22:18.633694 | orchestrator | changed: [testbed-manager]
2026-03-19 00:22:18.633704 | orchestrator |
2026-03-19 00:22:18.633712 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-19 00:22:18.633729 | orchestrator | Thursday 19 March 2026 00:22:12 +0000 (0:00:00.527) 0:00:01.609 ********
2026-03-19 00:22:18.633738 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-19 00:22:18.633747 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-19 00:22:18.633756 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-19 00:22:18.633764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-19 00:22:18.633773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-19 00:22:18.633781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-19 00:22:18.633790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-19 00:22:18.633798 | orchestrator |
2026-03-19 00:22:18.633807 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-19 00:22:18.633815 | orchestrator | Thursday 19 March 2026 00:22:17 +0000 (0:00:05.569) 0:00:07.178 ********
2026-03-19 00:22:18.633824 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:22:18.633832 | orchestrator |
2026-03-19 00:22:18.633841 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-19 00:22:18.633849 | orchestrator | Thursday 19 March 2026 00:22:17 +0000 (0:00:00.095) 0:00:07.274 ********
2026-03-19 00:22:18.633858 | orchestrator | changed: [testbed-manager]
2026-03-19 00:22:18.633866 | orchestrator |
2026-03-19 00:22:18.633875 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:22:18.633886 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-19 00:22:18.633895 | orchestrator |
2026-03-19 00:22:18.633904 | orchestrator |
2026-03-19 00:22:18.633934 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:22:18.633943 | orchestrator | Thursday 19 March 2026 00:22:18 +0000 (0:00:00.525) 0:00:07.799 ********
2026-03-19 00:22:18.633952 | orchestrator | ===============================================================================
2026-03-19 00:22:18.633960 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.57s
2026-03-19 00:22:18.633969 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.90s
2026-03-19 00:22:18.633977 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2026-03-19 00:22:18.633985 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.53s
2026-03-19 00:22:18.633994 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s
2026-03-19 00:22:18.760387 | orchestrator | + osism apply known-hosts
2026-03-19 00:22:29.971654 | orchestrator | 2026-03-19 00:22:29 | INFO  | Prepare task for execution of known-hosts.
2026-03-19 00:22:30.043780 | orchestrator | 2026-03-19 00:22:30 | INFO  | Task 0945c02a-c1df-4fbf-ae7d-3eab4ecc5646 (known-hosts) was prepared for execution.
2026-03-19 00:22:30.043881 | orchestrator | 2026-03-19 00:22:30 | INFO  | It takes a moment until task 0945c02a-c1df-4fbf-ae7d-3eab4ecc5646 (known-hosts) has been started and output is visible here.
2026-03-19 00:22:45.472634 | orchestrator |
2026-03-19 00:22:45.472702 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-19 00:22:45.472716 | orchestrator |
2026-03-19 00:22:45.472727 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-19 00:22:45.472740 | orchestrator | Thursday 19 March 2026 00:22:33 +0000 (0:00:00.195) 0:00:00.195 ********
2026-03-19 00:22:45.472751 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-19 00:22:45.472762 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-19 00:22:45.472773 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-19 00:22:45.472799 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-19 00:22:45.472810 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-19 00:22:45.472821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-19 00:22:45.472831 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-19 00:22:45.472842 | orchestrator |
2026-03-19 00:22:45.472853 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-19 00:22:45.472864 | orchestrator | Thursday 19 March 2026 00:22:39 +0000 (0:00:06.428) 0:00:06.624 ********
2026-03-19 00:22:45.472884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-19 00:22:45.472898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-19 00:22:45.472948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-19 00:22:45.472960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-19 00:22:45.472971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-19 00:22:45.472981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-19 00:22:45.472992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-19 00:22:45.473003 | orchestrator |
2026-03-19 00:22:45.473014 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:45.473025 | orchestrator | Thursday 19 March 2026 00:22:39 +0000 (0:00:00.158) 0:00:06.782 ********
2026-03-19 00:22:45.473039 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdM9jZJKhD5TNPT2JilBsLe2T5Nj54Tf/Cuk1UM5OKj8hr4+iCXIZf9Ja3ZWpGNh8mMd39hP7Hq/dg++MhgwIYqEkf8fjePzRxuNBDjN+xHnMoDva6smttphzf/Fp4dTjA1WnynLIQe3ApfCHnqAOKM5+38y7KcsQl+UgtxS+xDg7neN88GplWOePCo1NDdZt3rK9mFdxiZHEhXz+RxPo0QZyeRnf2JfloaBYcqPZCUXiWoCofJoLgivfzb7dCEj2dOhOPFRZYqQKbAleBSnz8ylCFsxskZlWO+PxHRZnUD3Pi9T5ofpl0c+1RQqVn65wXrT7nvclZYP8CYjxyuNyI0YdHBfwoHO9/ffOUNPt8V5d6nJVG8jr5Lnz3SgcjWQlR5tNQ34mOrE3W3hbsftJ+PxaYxeaopjO8ej1OhKyJBGcZQYCfAkGNED5HOJ1yqdboI9KQJEts0p5awLOK1VeBhR9ZyiwZkhvQnigltnpmrR1fdcJwTuHu0m00+2Wkq28=)
2026-03-19 00:22:45.473053 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4acKV6U7QcM2M5KzxQRsr2AWh9aaOowV71RnCr8tEWaS7LunaWMdcJRxp9xD2RtFHc+yGdbJn61MEPtQOFqQc=)
2026-03-19 00:22:45.473067 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXDeFenpJ+CDyXcAAUQwg+Fbjs6VMcXjQm9aDTej+9s)
2026-03-19 00:22:45.473080 | orchestrator |
2026-03-19 00:22:45.473091 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:45.473102 | orchestrator | Thursday 19 March 2026 00:22:41 +0000 (0:00:01.246) 0:00:08.028 ********
2026-03-19 00:22:45.473131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4o3VPl8UuBelSJ0muVo/7qyDNZmXg20PifG+wEwCET7mIuBWOSYgphwELNz6XrD3VUf72AYwGR554rI5iVrQieiHcn6jKCwH+0aRk3PXzZkeV37XLTAnqAusz1FqhxAOZAQYhDowkC5yqnx8EC9ik5EKoJeVvHvlqmAXLv3nG4ORLEldvg16z/o1GNiNZskrEVldZLZxw1piqzsiPEVmT+9lYV1TUGjjvFGN/QtAn/1BkE/Yt/IRRFJVSHY4KKNmJz+ilt1XOW+v4rn2gFmnVG6sN3B/YHFPer+RNPHtes/X3/Pwky2Gb11K/OzHoNPi1f8K5FfmQxcevkMK0yLDhQR8jTHMtt7/fNGoY3ndMx/67zxurYeBHBCzGgJTFEkjHj9+ZIexWqpR6ExOmzHS1l8F7La3h5pwzUoZkt9P8SgzL67JFcYD2VKIfSEOmcIzQRg4f+IInRDmsVXy96xtW0MXKUH4lMEv+WAXz6rxx3EoPNy3ch6X4giey/Jj3VLs=)
2026-03-19 00:22:45.473152 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnu7XzJQjHDFTZ7qp9fShyYpkwJJJg9LpEap76LMFY//eI866y+P+TmJGJepJxIRk4k7KYpKeMyECOljPfBJGo=)
2026-03-19 00:22:45.473169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDRnlgMbnT0kji5BjNK2v0lT06Ckj0Xloz/t/R/6xhEt)
2026-03-19 00:22:45.473188 | orchestrator |
2026-03-19 00:22:45.473203 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:45.473217 | orchestrator | Thursday 19 March 2026 00:22:42 +0000 (0:00:00.996) 0:00:09.024 ********
2026-03-19 00:22:45.473230 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzFrOwdjFWF8JeUi+2DMa5HZnyD1M9RrTbhQcJ+nAqCk5ba0ILwjD6Fj6Ma/z7dt0d8g8z8hVIhWW4zHcLBggOlBkF73veBspQG1G65sDmpqVO9/Y+FsDQ9Yp22QBSAmO0ybZ7qhcVSdl7iDMZufxIepoNjlgPMEVMPh+mRC1V0u6iO8J+2soIEe+nGaopoQHPJnugm5BsGnbjiOGL4K0CsfyChgqaWWnFu+gNSZX9CmJtpbC+jIBn0F2mE+jnmuQd6zDZzc9mfwhWJjtc9ZYXncnsMOuX3MXMY3oL2VLdanwbkemJMv1TMqFYNsJW3h/TqEQus4BTmp4lZdmoG54LniiJ/CdG0oyyintWxi43y5ZEic4OA+Gvs3j+O+DGvXNyW87hgeWk/ge4urWvNZFrRrYnvtuakWSsmwPiUynGhaJDxHGOBzr3i+LfbF+m7xg3oHAqzMu6bwLKWXpXknjmY/GyazNqDzTLmuyvmVsKubuAfSSpoaHix8jlSlaNMVU=)
2026-03-19 00:22:45.473242 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNtfL7vMbGzoR0g7ZMg4KAim+OUZ0Y22tojGgEqGAglqiklK6OheAokxYTUNofVi77o9A7ci9u33X9mSSdQyNOk=)
2026-03-19 00:22:45.473327 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcXAqlRRNH97eJ/ELVcZ64sM0+iUMAcTuiEXzJPBc8n)
2026-03-19 00:22:45.473341 | orchestrator |
2026-03-19 00:22:45.473353 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:45.473366 | orchestrator | Thursday 19 March 2026 00:22:43 +0000 (0:00:01.036) 0:00:10.061 ********
2026-03-19 00:22:45.473379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP4EOtTi47QBvX8s9Qt9+LPDhnna64YJiq1TeABtj7S7)
2026-03-19 00:22:45.473392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcrff0aPBGGB2Azz0SaTyKdAvBl/76Oop+9n+Px/3Xq57g9cJQVBugANYR4YNX2v5E5+8LNWqLiJNw2frSEc0rVnJfa/m7nTFhGZMA0KNCGNJ6K0qYZ9wnb2jRuKjL5oawBsjDN99skiMqMDeWa+Fi90vIvb4dcWmUg+CxFT15WrbgcemOxkC0O8WGKTmhZ90tpyEBYe8+9/p7I1uRNJBLQ8bc7z9BG7hij8lhoiWJbAi9IqFz5W60Fa+jtJ+xGP8sS3uX9SlgJHHQ5DOqe2Z8pnVOY/eTcuLNC14mOax3XBSVYic9z+sS1z/qUwUXE1dJ1ZxW8Wut6mQcR5EVK0w2GR8kQMyvZfvCD3jsaXGfnQni69jmQq8K8nCHGozPrIGLalcfS5R+SJYiXRKjVsyQfxSnZb0lHCB0taRNUBia5ZvdDqcFu2M+KgvDjD1MS5KsZ1BITG0wuCdQCo/OqN6rejpeR9hRm+u4gm6nzUQ4RMoHALoz+iRFcb2c6EiXzWs=)
2026-03-19 00:22:45.473404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKdyRl3UWtd+ev87qSd/W1ng+CeUtRn77CyP0cAzAKK8sAcMYCGwMJqZqIX4KXf+yJZ0Izw9/+N5qowMkKqVWDE=)
2026-03-19 00:22:45.473416 | orchestrator |
2026-03-19 00:22:45.473428 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:45.473440 | orchestrator | Thursday 19 March 2026 00:22:44 +0000 (0:00:01.041) 0:00:11.102 ********
2026-03-19 00:22:45.473452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Wsmmt7ai0/PTUzE6YOrxe7Ee3WUxS3wO7dklWnO/c3xODN5Zt7eaHbHJjoo17dbLDh8Je5Aly55mSKejPkkI9Puc5BoAknLAXMPRHqLw/6NjuI40B608Jssky6T0Dcw0F4SgJVUJpNAembNtQNheqFYZVbJBn65rQnBI1uBDglnGeb+mgLWC4oa3AH1iwg4N0PFIP6RexxSLSFuFm+jl60zPgpMWRbaq7zh9JTGwKSoCalyQYKb4xFNEue/XlZPL3qUJQWBA5It4KTDrCeG7+IeyG8uXur6lXOxuBcH7eRiujlhSkjNna96h51C9mCv1H/MgEZgEaLld4QXuysP8EHWPSerEA2EUKplg2Pwn8hnLlzUiBzzXW6I0cD9Zp0psRkA1p9QlOUqqTxsEaZ3m+s+PusyefQm0JUC0oFi/c8VdUKFsIFAeG2KyfckOzVexV6Hu1bWjNuZyxtbT0kXcHf/x7guLyO9LT7LCnnvgIC37Wna2JLMW+OypgrMNv0U=)
2026-03-19 00:22:45.473471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCbpQ90VXIBqhFoX6jLlO6Zwpu7PpqX5IbUTEkoJJKtf42gWm2XVbiCFebphTCzAJCZqulwP9BOy27+D3gc2rsw=)
2026-03-19 00:22:45.473484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACQFhpbRND/WtT3egTx/uVP8ZI7/p42IxPuDEDojNdW)
2026-03-19 00:22:45.473496 | orchestrator |
2026-03-19 00:22:45.473509 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:45.473521 | orchestrator | Thursday 19 March 2026 00:22:45 +0000 (0:00:01.017) 0:00:12.120 ********
2026-03-19 00:22:45.473540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD/1plGzDBkp9P0rx4K4he+wa3gxxys1LwzCIdTZsIUZ8A2uHd5dr4Qn0dOojOqzImryAkZtpdgcS9OSK9OPQYU=)
2026-03-19 00:22:56.098192 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+xu6zOClUZOteppDgvpRyB1tfk4hMllR618Z8bI8xg)
2026-03-19 00:22:56.098323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVtfV6iUvj1iEWcOd3BhQFUHKQJTHMz6ZSWZCbCrnFbeZb+VBBnEYF+IAfRvpBO4LdPicPSjcNhIhILDWDq4h2TGcEkmK32PWSyJuAjhAzwGBemn4of1WQYoGhXdqx11zhK1CgteVHO9rAn3ycukaUFvyrtyhph6A5kco2zGenqxMF8bhyGrwrQ2ZKj2zglQHpVbX8WfKzSnN9qxVJPetuKTVWAF1f3cX1E7oQldULnz4qDqAoK6LglQsl/latW7SBHCCqWxelLI4x6zXq/SIfaHKa082wRJxHz3cUrJHlwKirBAF/u6VqPOjm2VdXI7iKoHj+w1eI6UjqkTRN9YNpHjxmVIE3Nj7nUuBB2zkxksTPLwF61Dijt0NisWGSbZ0vlZ4DiyDwnLlPndX1K5eLjB3ugoeSPHAWuR53yEGJzRUhEMjQ5URMEgtDL1WOOM2bSgDQmQFyHXAUZ9WYAo2f9cjAib6nSiohjTl/5pbsM5y3Oc2teDbJfrW7r9GwnO8=)
2026-03-19 00:22:56.098343 | orchestrator |
2026-03-19 00:22:56.098357 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-19 00:22:56.098370 | orchestrator | Thursday 19 March 2026 00:22:46 +0000 (0:00:01.018) 0:00:13.138 ********
2026-03-19 00:22:56.098381 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtA1imzf25ncRat/veqgcKzaRalpVfqKRwcN4iDp2QJgdpNMoagBMa0xBlbL36yaa1UXFdxGFyA/gwUVYwry0U=)
2026-03-19 00:22:56.098395 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0IZVP3a/syNqflrug9B0Qyko3W8Uy5U/l8SeLZfe/celLQbmLH1r3b+kOB0Gl//zPH4dzBsY4Rq0b7/hZyiKI2HVm0rYUNcj2gMKg5IITdjHWPjGqyuPf/DNf28qg/8V+zA9cXhEWDTcp1+5UZsYCnNvdER0w4q+sMZsiZOSjKbmvH1c+K6n3YLugDJPWbMxKBJBxNJX5Z18PyPC1HBm27Dlsn1+wXnINYUBEtSIrJRwASjI9L2w1yVgvEdd/KfTNK6UuyU6VQYqspchEIw2R+YzgxEsu0XejaYu7PpTgFun8ugyQNFA3QbjbA4AH2Ehu/v0dX4f83qB53gZM44vjXTWwRKSAWIf8Wyaa6xAs/0xHZaJzZt/x/YNQnDRBY6ak4zM25B3g+OJrORYHjNhnbjDxKi40Zyd9ucBUCB0jseYtut2a/UadQ887n+kiyESmrTVySyRnFQ6AfdaZ88mkuapq0rLVQljMy49fK2YvAALS+Ex6+bH3yCzmecjkjnE=)
2026-03-19 00:22:56.098406 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUtDHp1w1Hjm5EUYFERhQao5Rj78NjUf4+LH1kqyzh7)
2026-03-19 00:22:56.098418 | orchestrator |
2026-03-19 00:22:56.098429 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-03-19 00:22:56.098441 | orchestrator | Thursday 19 March 2026 00:22:47 +0000 (0:00:05.240) 0:00:14.195 ********
2026-03-19 00:22:56.098452 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-19 00:22:56.098464 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-19 00:22:56.098475 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-19 00:22:56.098486 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-19 00:22:56.098497 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-19 00:22:56.098530 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-19 00:22:56.098541 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-19 00:22:56.098578 | orchestrator |
2026-03-19 00:22:56.098590 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-03-19 00:22:56.098602 | orchestrator | Thursday 19 March 2026 00:22:52 +0000 (0:00:05.240) 0:00:19.435 ********
2026-03-19 00:22:56.098614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-19 00:22:56.098626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-19 00:22:56.098640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-19 00:22:56.098653 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-19 00:22:56.098666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-19 00:22:56.098679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-19 00:22:56.098692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-19 00:22:56.098705 | orchestrator | 2026-03-19 00:22:56.098734 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:22:56.098748 | orchestrator | Thursday 19 March 2026 00:22:52 +0000 (0:00:00.154) 0:00:19.589 ******** 2026-03-19 00:22:56.098760 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4acKV6U7QcM2M5KzxQRsr2AWh9aaOowV71RnCr8tEWaS7LunaWMdcJRxp9xD2RtFHc+yGdbJn61MEPtQOFqQc=) 2026-03-19 00:22:56.098774 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdM9jZJKhD5TNPT2JilBsLe2T5Nj54Tf/Cuk1UM5OKj8hr4+iCXIZf9Ja3ZWpGNh8mMd39hP7Hq/dg++MhgwIYqEkf8fjePzRxuNBDjN+xHnMoDva6smttphzf/Fp4dTjA1WnynLIQe3ApfCHnqAOKM5+38y7KcsQl+UgtxS+xDg7neN88GplWOePCo1NDdZt3rK9mFdxiZHEhXz+RxPo0QZyeRnf2JfloaBYcqPZCUXiWoCofJoLgivfzb7dCEj2dOhOPFRZYqQKbAleBSnz8ylCFsxskZlWO+PxHRZnUD3Pi9T5ofpl0c+1RQqVn65wXrT7nvclZYP8CYjxyuNyI0YdHBfwoHO9/ffOUNPt8V5d6nJVG8jr5Lnz3SgcjWQlR5tNQ34mOrE3W3hbsftJ+PxaYxeaopjO8ej1OhKyJBGcZQYCfAkGNED5HOJ1yqdboI9KQJEts0p5awLOK1VeBhR9ZyiwZkhvQnigltnpmrR1fdcJwTuHu0m00+2Wkq28=) 2026-03-19 00:22:56.098788 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXDeFenpJ+CDyXcAAUQwg+Fbjs6VMcXjQm9aDTej+9s) 2026-03-19 00:22:56.098800 | orchestrator | 2026-03-19 00:22:56.098813 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:22:56.098825 | orchestrator | Thursday 19 March 2026 00:22:53 +0000 (0:00:01.028) 0:00:20.618 ******** 2026-03-19 00:22:56.098838 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnu7XzJQjHDFTZ7qp9fShyYpkwJJJg9LpEap76LMFY//eI866y+P+TmJGJepJxIRk4k7KYpKeMyECOljPfBJGo=) 2026-03-19 00:22:56.098851 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4o3VPl8UuBelSJ0muVo/7qyDNZmXg20PifG+wEwCET7mIuBWOSYgphwELNz6XrD3VUf72AYwGR554rI5iVrQieiHcn6jKCwH+0aRk3PXzZkeV37XLTAnqAusz1FqhxAOZAQYhDowkC5yqnx8EC9ik5EKoJeVvHvlqmAXLv3nG4ORLEldvg16z/o1GNiNZskrEVldZLZxw1piqzsiPEVmT+9lYV1TUGjjvFGN/QtAn/1BkE/Yt/IRRFJVSHY4KKNmJz+ilt1XOW+v4rn2gFmnVG6sN3B/YHFPer+RNPHtes/X3/Pwky2Gb11K/OzHoNPi1f8K5FfmQxcevkMK0yLDhQR8jTHMtt7/fNGoY3ndMx/67zxurYeBHBCzGgJTFEkjHj9+ZIexWqpR6ExOmzHS1l8F7La3h5pwzUoZkt9P8SgzL67JFcYD2VKIfSEOmcIzQRg4f+IInRDmsVXy96xtW0MXKUH4lMEv+WAXz6rxx3EoPNy3ch6X4giey/Jj3VLs=) 2026-03-19 00:22:56.098872 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDRnlgMbnT0kji5BjNK2v0lT06Ckj0Xloz/t/R/6xhEt) 2026-03-19 00:22:56.098884 | orchestrator | 2026-03-19 00:22:56.098896 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:22:56.098954 | orchestrator | Thursday 19 March 2026 00:22:54 +0000 (0:00:01.061) 0:00:21.679 ******** 2026-03-19 00:22:56.098968 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcXAqlRRNH97eJ/ELVcZ64sM0+iUMAcTuiEXzJPBc8n) 2026-03-19 00:22:56.098982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzFrOwdjFWF8JeUi+2DMa5HZnyD1M9RrTbhQcJ+nAqCk5ba0ILwjD6Fj6Ma/z7dt0d8g8z8hVIhWW4zHcLBggOlBkF73veBspQG1G65sDmpqVO9/Y+FsDQ9Yp22QBSAmO0ybZ7qhcVSdl7iDMZufxIepoNjlgPMEVMPh+mRC1V0u6iO8J+2soIEe+nGaopoQHPJnugm5BsGnbjiOGL4K0CsfyChgqaWWnFu+gNSZX9CmJtpbC+jIBn0F2mE+jnmuQd6zDZzc9mfwhWJjtc9ZYXncnsMOuX3MXMY3oL2VLdanwbkemJMv1TMqFYNsJW3h/TqEQus4BTmp4lZdmoG54LniiJ/CdG0oyyintWxi43y5ZEic4OA+Gvs3j+O+DGvXNyW87hgeWk/ge4urWvNZFrRrYnvtuakWSsmwPiUynGhaJDxHGOBzr3i+LfbF+m7xg3oHAqzMu6bwLKWXpXknjmY/GyazNqDzTLmuyvmVsKubuAfSSpoaHix8jlSlaNMVU=) 2026-03-19 00:22:56.098994 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNtfL7vMbGzoR0g7ZMg4KAim+OUZ0Y22tojGgEqGAglqiklK6OheAokxYTUNofVi77o9A7ci9u33X9mSSdQyNOk=) 2026-03-19 00:22:56.099005 | orchestrator | 2026-03-19 00:22:56.099016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:22:56.099028 | orchestrator | Thursday 19 March 2026 00:22:55 +0000 (0:00:01.062) 0:00:22.741 ******** 2026-03-19 00:22:56.099056 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDcrff0aPBGGB2Azz0SaTyKdAvBl/76Oop+9n+Px/3Xq57g9cJQVBugANYR4YNX2v5E5+8LNWqLiJNw2frSEc0rVnJfa/m7nTFhGZMA0KNCGNJ6K0qYZ9wnb2jRuKjL5oawBsjDN99skiMqMDeWa+Fi90vIvb4dcWmUg+CxFT15WrbgcemOxkC0O8WGKTmhZ90tpyEBYe8+9/p7I1uRNJBLQ8bc7z9BG7hij8lhoiWJbAi9IqFz5W60Fa+jtJ+xGP8sS3uX9SlgJHHQ5DOqe2Z8pnVOY/eTcuLNC14mOax3XBSVYic9z+sS1z/qUwUXE1dJ1ZxW8Wut6mQcR5EVK0w2GR8kQMyvZfvCD3jsaXGfnQni69jmQq8K8nCHGozPrIGLalcfS5R+SJYiXRKjVsyQfxSnZb0lHCB0taRNUBia5ZvdDqcFu2M+KgvDjD1MS5KsZ1BITG0wuCdQCo/OqN6rejpeR9hRm+u4gm6nzUQ4RMoHALoz+iRFcb2c6EiXzWs=) 2026-03-19 00:23:00.771149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKdyRl3UWtd+ev87qSd/W1ng+CeUtRn77CyP0cAzAKK8sAcMYCGwMJqZqIX4KXf+yJZ0Izw9/+N5qowMkKqVWDE=) 2026-03-19 00:23:00.771257 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP4EOtTi47QBvX8s9Qt9+LPDhnna64YJiq1TeABtj7S7) 2026-03-19 00:23:00.771274 | orchestrator | 2026-03-19 00:23:00.771288 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:23:00.771301 | orchestrator | Thursday 19 March 2026 00:22:56 +0000 (0:00:01.055) 0:00:23.797 ******** 2026-03-19 00:23:00.771334 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Wsmmt7ai0/PTUzE6YOrxe7Ee3WUxS3wO7dklWnO/c3xODN5Zt7eaHbHJjoo17dbLDh8Je5Aly55mSKejPkkI9Puc5BoAknLAXMPRHqLw/6NjuI40B608Jssky6T0Dcw0F4SgJVUJpNAembNtQNheqFYZVbJBn65rQnBI1uBDglnGeb+mgLWC4oa3AH1iwg4N0PFIP6RexxSLSFuFm+jl60zPgpMWRbaq7zh9JTGwKSoCalyQYKb4xFNEue/XlZPL3qUJQWBA5It4KTDrCeG7+IeyG8uXur6lXOxuBcH7eRiujlhSkjNna96h51C9mCv1H/MgEZgEaLld4QXuysP8EHWPSerEA2EUKplg2Pwn8hnLlzUiBzzXW6I0cD9Zp0psRkA1p9QlOUqqTxsEaZ3m+s+PusyefQm0JUC0oFi/c8VdUKFsIFAeG2KyfckOzVexV6Hu1bWjNuZyxtbT0kXcHf/x7guLyO9LT7LCnnvgIC37Wna2JLMW+OypgrMNv0U=) 2026-03-19 00:23:00.771349 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACQFhpbRND/WtT3egTx/uVP8ZI7/p42IxPuDEDojNdW) 2026-03-19 00:23:00.771386 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCbpQ90VXIBqhFoX6jLlO6Zwpu7PpqX5IbUTEkoJJKtf42gWm2XVbiCFebphTCzAJCZqulwP9BOy27+D3gc2rsw=) 2026-03-19 00:23:00.771398 | orchestrator | 2026-03-19 00:23:00.771409 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:23:00.771420 | orchestrator | Thursday 19 March 2026 00:22:57 +0000 (0:00:01.028) 0:00:24.825 ******** 2026-03-19 00:23:00.771431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+xu6zOClUZOteppDgvpRyB1tfk4hMllR618Z8bI8xg) 2026-03-19 00:23:00.771442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVtfV6iUvj1iEWcOd3BhQFUHKQJTHMz6ZSWZCbCrnFbeZb+VBBnEYF+IAfRvpBO4LdPicPSjcNhIhILDWDq4h2TGcEkmK32PWSyJuAjhAzwGBemn4of1WQYoGhXdqx11zhK1CgteVHO9rAn3ycukaUFvyrtyhph6A5kco2zGenqxMF8bhyGrwrQ2ZKj2zglQHpVbX8WfKzSnN9qxVJPetuKTVWAF1f3cX1E7oQldULnz4qDqAoK6LglQsl/latW7SBHCCqWxelLI4x6zXq/SIfaHKa082wRJxHz3cUrJHlwKirBAF/u6VqPOjm2VdXI7iKoHj+w1eI6UjqkTRN9YNpHjxmVIE3Nj7nUuBB2zkxksTPLwF61Dijt0NisWGSbZ0vlZ4DiyDwnLlPndX1K5eLjB3ugoeSPHAWuR53yEGJzRUhEMjQ5URMEgtDL1WOOM2bSgDQmQFyHXAUZ9WYAo2f9cjAib6nSiohjTl/5pbsM5y3Oc2teDbJfrW7r9GwnO8=) 2026-03-19 00:23:00.771454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD/1plGzDBkp9P0rx4K4he+wa3gxxys1LwzCIdTZsIUZ8A2uHd5dr4Qn0dOojOqzImryAkZtpdgcS9OSK9OPQYU=) 2026-03-19 00:23:00.771465 | orchestrator | 2026-03-19 00:23:00.771476 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 00:23:00.771487 | orchestrator | Thursday 19 March 2026 00:22:58 +0000 (0:00:01.021) 
0:00:25.847 ******** 2026-03-19 00:23:00.771498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0IZVP3a/syNqflrug9B0Qyko3W8Uy5U/l8SeLZfe/celLQbmLH1r3b+kOB0Gl//zPH4dzBsY4Rq0b7/hZyiKI2HVm0rYUNcj2gMKg5IITdjHWPjGqyuPf/DNf28qg/8V+zA9cXhEWDTcp1+5UZsYCnNvdER0w4q+sMZsiZOSjKbmvH1c+K6n3YLugDJPWbMxKBJBxNJX5Z18PyPC1HBm27Dlsn1+wXnINYUBEtSIrJRwASjI9L2w1yVgvEdd/KfTNK6UuyU6VQYqspchEIw2R+YzgxEsu0XejaYu7PpTgFun8ugyQNFA3QbjbA4AH2Ehu/v0dX4f83qB53gZM44vjXTWwRKSAWIf8Wyaa6xAs/0xHZaJzZt/x/YNQnDRBY6ak4zM25B3g+OJrORYHjNhnbjDxKi40Zyd9ucBUCB0jseYtut2a/UadQ887n+kiyESmrTVySyRnFQ6AfdaZ88mkuapq0rLVQljMy49fK2YvAALS+Ex6+bH3yCzmecjkjnE=) 2026-03-19 00:23:00.771509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtA1imzf25ncRat/veqgcKzaRalpVfqKRwcN4iDp2QJgdpNMoagBMa0xBlbL36yaa1UXFdxGFyA/gwUVYwry0U=) 2026-03-19 00:23:00.771520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUtDHp1w1Hjm5EUYFERhQao5Rj78NjUf4+LH1kqyzh7) 2026-03-19 00:23:00.771531 | orchestrator | 2026-03-19 00:23:00.771542 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-19 00:23:00.771552 | orchestrator | Thursday 19 March 2026 00:22:59 +0000 (0:00:01.012) 0:00:26.860 ******** 2026-03-19 00:23:00.771564 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-19 00:23:00.771575 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-19 00:23:00.771605 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-19 00:23:00.771617 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-19 00:23:00.771628 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-19 00:23:00.771639 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-19 
00:23:00.771649 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-19 00:23:00.771660 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:23:00.771671 | orchestrator | 2026-03-19 00:23:00.771684 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-19 00:23:00.771696 | orchestrator | Thursday 19 March 2026 00:23:00 +0000 (0:00:00.162) 0:00:27.022 ******** 2026-03-19 00:23:00.771717 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:23:00.771729 | orchestrator | 2026-03-19 00:23:00.771741 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-19 00:23:00.771754 | orchestrator | Thursday 19 March 2026 00:23:00 +0000 (0:00:00.058) 0:00:27.080 ******** 2026-03-19 00:23:00.771766 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:23:00.771778 | orchestrator | 2026-03-19 00:23:00.771790 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-19 00:23:00.771803 | orchestrator | Thursday 19 March 2026 00:23:00 +0000 (0:00:00.052) 0:00:27.133 ******** 2026-03-19 00:23:00.771815 | orchestrator | changed: [testbed-manager] 2026-03-19 00:23:00.771827 | orchestrator | 2026-03-19 00:23:00.771840 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:23:00.771853 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 00:23:00.771866 | orchestrator | 2026-03-19 00:23:00.771878 | orchestrator | 2026-03-19 00:23:00.771891 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:23:00.771926 | orchestrator | Thursday 19 March 2026 00:23:00 +0000 (0:00:00.471) 0:00:27.605 ******** 2026-03-19 00:23:00.771938 | orchestrator | =============================================================================== 
2026-03-19 00:23:00.771948 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.43s 2026-03-19 00:23:00.771959 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.24s 2026-03-19 00:23:00.771970 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2026-03-19 00:23:00.771981 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-19 00:23:00.771992 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-19 00:23:00.772002 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-19 00:23:00.772013 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-19 00:23:00.772024 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-19 00:23:00.772035 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-19 00:23:00.772045 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-19 00:23:00.772056 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-19 00:23:00.772073 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-19 00:23:00.772085 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-19 00:23:00.772095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-19 00:23:00.772106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-19 00:23:00.772117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 
2026-03-19 00:23:00.772127 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.47s 2026-03-19 00:23:00.772138 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-19 00:23:00.772149 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-19 00:23:00.772160 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.15s 2026-03-19 00:23:00.934703 | orchestrator | + osism apply squid 2026-03-19 00:23:12.209304 | orchestrator | 2026-03-19 00:23:12 | INFO  | Prepare task for execution of squid. 2026-03-19 00:23:12.284552 | orchestrator | 2026-03-19 00:23:12 | INFO  | Task 3f51e1d4-be86-424c-80fd-2ce4d8341a8d (squid) was prepared for execution. 2026-03-19 00:23:12.284653 | orchestrator | 2026-03-19 00:23:12 | INFO  | It takes a moment until task 3f51e1d4-be86-424c-80fd-2ce4d8341a8d (squid) has been started and output is visible here. 
2026-03-19 00:25:04.926410 | orchestrator | 2026-03-19 00:25:04.926525 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-19 00:25:04.926541 | orchestrator | 2026-03-19 00:25:04.926553 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-19 00:25:04.926564 | orchestrator | Thursday 19 March 2026 00:23:15 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-03-19 00:25:04.926574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 00:25:04.926586 | orchestrator | 2026-03-19 00:25:04.926596 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-19 00:25:04.926605 | orchestrator | Thursday 19 March 2026 00:23:15 +0000 (0:00:00.077) 0:00:00.265 ******** 2026-03-19 00:25:04.926615 | orchestrator | ok: [testbed-manager] 2026-03-19 00:25:04.926626 | orchestrator | 2026-03-19 00:25:04.926635 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-19 00:25:04.926645 | orchestrator | Thursday 19 March 2026 00:23:17 +0000 (0:00:02.276) 0:00:02.542 ******** 2026-03-19 00:25:04.926655 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-19 00:25:04.926665 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-19 00:25:04.926675 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-19 00:25:04.926685 | orchestrator | 2026-03-19 00:25:04.926695 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-19 00:25:04.926704 | orchestrator | Thursday 19 March 2026 00:23:18 +0000 (0:00:01.265) 0:00:03.807 ******** 2026-03-19 00:25:04.926714 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-19 00:25:04.926723 | 
orchestrator | 2026-03-19 00:25:04.926733 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-19 00:25:04.926743 | orchestrator | Thursday 19 March 2026 00:23:20 +0000 (0:00:01.039) 0:00:04.847 ******** 2026-03-19 00:25:04.926752 | orchestrator | ok: [testbed-manager] 2026-03-19 00:25:04.926762 | orchestrator | 2026-03-19 00:25:04.926771 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-19 00:25:04.926802 | orchestrator | Thursday 19 March 2026 00:23:20 +0000 (0:00:00.339) 0:00:05.186 ******** 2026-03-19 00:25:04.926812 | orchestrator | changed: [testbed-manager] 2026-03-19 00:25:04.926822 | orchestrator | 2026-03-19 00:25:04.926832 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-19 00:25:04.926841 | orchestrator | Thursday 19 March 2026 00:23:21 +0000 (0:00:00.890) 0:00:06.077 ******** 2026-03-19 00:25:04.926851 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-19 00:25:04.926862 | orchestrator | ok: [testbed-manager] 2026-03-19 00:25:04.926871 | orchestrator | 2026-03-19 00:25:04.926881 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-19 00:25:04.926952 | orchestrator | Thursday 19 March 2026 00:23:52 +0000 (0:00:30.788) 0:00:36.865 ******** 2026-03-19 00:25:04.926971 | orchestrator | changed: [testbed-manager] 2026-03-19 00:25:04.926988 | orchestrator | 2026-03-19 00:25:04.927002 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-19 00:25:04.927015 | orchestrator | Thursday 19 March 2026 00:24:04 +0000 (0:00:12.005) 0:00:48.871 ******** 2026-03-19 00:25:04.927027 | orchestrator | Pausing for 60 seconds 2026-03-19 00:25:04.927040 | orchestrator | changed: [testbed-manager] 2026-03-19 00:25:04.927053 | orchestrator | 2026-03-19 00:25:04.927067 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-19 00:25:04.927079 | orchestrator | Thursday 19 March 2026 00:25:04 +0000 (0:01:00.070) 0:01:48.942 ******** 2026-03-19 00:25:04.927092 | orchestrator | ok: [testbed-manager] 2026-03-19 00:25:04.927105 | orchestrator | 2026-03-19 00:25:04.927118 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-19 00:25:04.927161 | orchestrator | Thursday 19 March 2026 00:25:04 +0000 (0:00:00.058) 0:01:49.000 ******** 2026-03-19 00:25:04.927174 | orchestrator | changed: [testbed-manager] 2026-03-19 00:25:04.927186 | orchestrator | 2026-03-19 00:25:04.927199 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:25:04.927212 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:25:04.927224 | orchestrator | 2026-03-19 00:25:04.927237 | orchestrator | 2026-03-19 00:25:04.927265 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-19 00:25:04.927278 | orchestrator | Thursday 19 March 2026 00:25:04 +0000 (0:00:00.575) 0:01:49.576 ******** 2026-03-19 00:25:04.927291 | orchestrator | =============================================================================== 2026-03-19 00:25:04.927303 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-03-19 00:25:04.927313 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.79s 2026-03-19 00:25:04.927324 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.01s 2026-03-19 00:25:04.927334 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.28s 2026-03-19 00:25:04.927345 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.27s 2026-03-19 00:25:04.927356 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s 2026-03-19 00:25:04.927366 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s 2026-03-19 00:25:04.927376 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-03-19 00:25:04.927387 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-03-19 00:25:04.927398 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-03-19 00:25:04.927409 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-19 00:25:05.098637 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-19 00:25:05.098749 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-19 00:25:05.102430 | orchestrator | + set -e 2026-03-19 00:25:05.102462 | orchestrator | + NAMESPACE=kolla 2026-03-19 
00:25:05.102474 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-19 00:25:05.108344 | orchestrator | ++ semver latest 9.0.0 2026-03-19 00:25:05.164985 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-19 00:25:05.165098 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-19 00:25:05.165354 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-19 00:25:16.483514 | orchestrator | 2026-03-19 00:25:16 | INFO  | Prepare task for execution of operator. 2026-03-19 00:25:16.550487 | orchestrator | 2026-03-19 00:25:16 | INFO  | Task 0daa2c4a-c518-4b4c-b248-9ddc8a15a3d3 (operator) was prepared for execution. 2026-03-19 00:25:16.550584 | orchestrator | 2026-03-19 00:25:16 | INFO  | It takes a moment until task 0daa2c4a-c518-4b4c-b248-9ddc8a15a3d3 (operator) has been started and output is visible here. 2026-03-19 00:25:32.428800 | orchestrator | 2026-03-19 00:25:32.429000 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-19 00:25:32.429018 | orchestrator | 2026-03-19 00:25:32.429028 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 00:25:32.429049 | orchestrator | Thursday 19 March 2026 00:25:19 +0000 (0:00:00.196) 0:00:00.196 ******** 2026-03-19 00:25:32.429813 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:25:32.429836 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:25:32.429846 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:25:32.429855 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:25:32.429864 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:25:32.429872 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:25:32.429911 | orchestrator | 2026-03-19 00:25:32.429923 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-19 00:25:32.429959 | orchestrator | Thursday 19 March 2026 00:25:23 
+0000 (0:00:04.346) 0:00:04.543 ******** 2026-03-19 00:25:32.429968 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:25:32.429976 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:25:32.429985 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:25:32.429993 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:25:32.430002 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:25:32.430011 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:25:32.430066 | orchestrator | 2026-03-19 00:25:32.430076 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-19 00:25:32.430085 | orchestrator | 2026-03-19 00:25:32.430094 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-19 00:25:32.430103 | orchestrator | Thursday 19 March 2026 00:25:24 +0000 (0:00:00.830) 0:00:05.373 ******** 2026-03-19 00:25:32.430111 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:25:32.430120 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:25:32.430128 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:25:32.430137 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:25:32.430145 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:25:32.430154 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:25:32.430162 | orchestrator | 2026-03-19 00:25:32.430171 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-19 00:25:32.430198 | orchestrator | Thursday 19 March 2026 00:25:24 +0000 (0:00:00.155) 0:00:05.529 ******** 2026-03-19 00:25:32.430207 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:25:32.430216 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:25:32.430224 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:25:32.430233 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:25:32.430241 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:25:32.430250 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:25:32.430258 | orchestrator | 
2026-03-19 00:25:32.430267 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-19 00:25:32.430275 | orchestrator | Thursday 19 March 2026 00:25:25 +0000 (0:00:00.163) 0:00:05.692 ******** 2026-03-19 00:25:32.430285 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:25:32.430294 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:25:32.430303 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:25:32.430311 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:25:32.430320 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:25:32.430328 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:25:32.430337 | orchestrator | 2026-03-19 00:25:32.430345 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-19 00:25:32.430354 | orchestrator | Thursday 19 March 2026 00:25:25 +0000 (0:00:00.675) 0:00:06.367 ******** 2026-03-19 00:25:32.430363 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:25:32.430371 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:25:32.430380 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:25:32.430388 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:25:32.430397 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:25:32.430405 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:25:32.430414 | orchestrator | 2026-03-19 00:25:32.430423 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-19 00:25:32.430432 | orchestrator | Thursday 19 March 2026 00:25:26 +0000 (0:00:00.888) 0:00:07.256 ******** 2026-03-19 00:25:32.430440 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-19 00:25:32.430449 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-19 00:25:32.430458 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-19 00:25:32.430467 | orchestrator | changed: [testbed-node-2] => (item=adm) 
2026-03-19 00:25:32.430475 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-19 00:25:32.430484 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-19 00:25:32.430492 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-19 00:25:32.430500 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-19 00:25:32.430509 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-19 00:25:32.430525 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-19 00:25:32.430534 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-19 00:25:32.430542 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-19 00:25:32.430551 | orchestrator |
2026-03-19 00:25:32.430559 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-19 00:25:32.430568 | orchestrator | Thursday 19 March 2026 00:25:27 +0000 (0:00:01.152) 0:00:08.408 ********
2026-03-19 00:25:32.430577 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:25:32.430585 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:25:32.430594 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:25:32.430602 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:25:32.430611 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:25:32.430619 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:25:32.430628 | orchestrator |
2026-03-19 00:25:32.430637 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-19 00:25:32.430646 | orchestrator | Thursday 19 March 2026 00:25:29 +0000 (0:00:01.296) 0:00:09.704 ********
2026-03-19 00:25:32.430654 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 00:25:32.430663 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 00:25:32.430672 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 00:25:32.430681 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 00:25:32.430689 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 00:25:32.430720 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 00:25:32.430729 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-19 00:25:32.430738 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-19 00:25:32.430747 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-19 00:25:32.430756 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-19 00:25:32.430764 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-19 00:25:32.430773 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-19 00:25:32.430781 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-19 00:25:32.430790 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-19 00:25:32.430799 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-19 00:25:32.430812 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-19 00:25:32.430821 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-19 00:25:32.430829 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-19 00:25:32.430838 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-19 00:25:32.430847 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-19 00:25:32.430855 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-19 00:25:32.430864 | orchestrator |
2026-03-19 00:25:32.430872 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-19 00:25:32.430899 | orchestrator | Thursday 19 March 2026 00:25:30 +0000 (0:00:01.339) 0:00:11.043 ********
2026-03-19 00:25:32.430914 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:32.430929 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:32.430943 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:32.430959 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:32.430973 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:32.430987 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:32.430996 | orchestrator |
2026-03-19 00:25:32.431005 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-19 00:25:32.431021 | orchestrator | Thursday 19 March 2026 00:25:30 +0000 (0:00:00.150) 0:00:11.194 ********
2026-03-19 00:25:32.431030 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:32.431039 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:32.431047 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:32.431056 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:32.431064 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:32.431073 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:32.431081 | orchestrator |
2026-03-19 00:25:32.431090 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-19 00:25:32.431099 | orchestrator | Thursday 19 March 2026 00:25:30 +0000 (0:00:00.176) 0:00:11.371 ********
2026-03-19 00:25:32.431107 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:25:32.431116 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:25:32.431124 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:25:32.431132 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:25:32.431141 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:25:32.431149 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:25:32.431158 | orchestrator |
2026-03-19 00:25:32.431166 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-19 00:25:32.431175 | orchestrator | Thursday 19 March 2026 00:25:31 +0000 (0:00:00.546) 0:00:11.918 ********
2026-03-19 00:25:32.431183 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:32.431192 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:32.431200 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:32.431209 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:32.431217 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:32.431226 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:32.431234 | orchestrator |
2026-03-19 00:25:32.431243 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-19 00:25:32.431251 | orchestrator | Thursday 19 March 2026 00:25:31 +0000 (0:00:00.188) 0:00:12.106 ********
2026-03-19 00:25:32.431260 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-19 00:25:32.431269 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-19 00:25:32.431277 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:25:32.431286 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:25:32.431295 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-19 00:25:32.431303 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-19 00:25:32.431311 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:25:32.431320 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:25:32.431329 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-19 00:25:32.431338 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:25:32.431346 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 00:25:32.431354 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:25:32.431363 | orchestrator |
2026-03-19 00:25:32.431372 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-19 00:25:32.431380 | orchestrator | Thursday 19 March 2026 00:25:32 +0000 (0:00:00.737) 0:00:12.843 ********
2026-03-19 00:25:32.431389 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:32.431397 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:32.431406 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:32.431415 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:32.431423 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:32.431431 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:32.431440 | orchestrator |
2026-03-19 00:25:32.431449 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-19 00:25:32.431457 | orchestrator | Thursday 19 March 2026 00:25:32 +0000 (0:00:00.118) 0:00:12.962 ********
2026-03-19 00:25:32.431466 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:32.431474 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:32.431483 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:32.431491 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:32.431512 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:33.599548 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:33.599641 | orchestrator |
2026-03-19 00:25:33.599654 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-19 00:25:33.599667 | orchestrator | Thursday 19 March 2026 00:25:32 +0000 (0:00:00.117) 0:00:13.079 ********
2026-03-19 00:25:33.599678 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:33.599689 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:33.599700 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:33.599711 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:33.599721 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:33.599732 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:33.599742 | orchestrator |
2026-03-19 00:25:33.599753 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-19 00:25:33.599764 | orchestrator | Thursday 19 March 2026 00:25:32 +0000 (0:00:00.132) 0:00:13.212 ********
2026-03-19 00:25:33.599774 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:25:33.599785 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:25:33.599795 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:25:33.599806 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:25:33.599816 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:25:33.599827 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:25:33.599837 | orchestrator |
2026-03-19 00:25:33.599848 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-19 00:25:33.599859 | orchestrator | Thursday 19 March 2026 00:25:33 +0000 (0:00:00.637) 0:00:13.850 ********
2026-03-19 00:25:33.599869 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:25:33.599879 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:25:33.599965 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:25:33.599977 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:25:33.599987 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:25:33.599998 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:25:33.600008 | orchestrator |
2026-03-19 00:25:33.600019 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:25:33.600063 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 00:25:33.600076 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 00:25:33.600087 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 00:25:33.600099 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 00:25:33.600110 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 00:25:33.600120 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 00:25:33.600131 | orchestrator |
2026-03-19 00:25:33.600141 | orchestrator |
2026-03-19 00:25:33.600152 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:25:33.600163 | orchestrator | Thursday 19 March 2026 00:25:33 +0000 (0:00:00.210) 0:00:14.060 ********
2026-03-19 00:25:33.600174 | orchestrator | ===============================================================================
2026-03-19 00:25:33.600184 | orchestrator | Gathering Facts --------------------------------------------------------- 4.35s
2026-03-19 00:25:33.600195 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s
2026-03-19 00:25:33.600206 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s
2026-03-19 00:25:33.600243 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2026-03-19 00:25:33.600254 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s
2026-03-19 00:25:33.600265 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s
2026-03-19 00:25:33.600275 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s
2026-03-19 00:25:33.600286 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s
2026-03-19 00:25:33.600297 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2026-03-19 00:25:33.600307 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s
2026-03-19 00:25:33.600318 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-03-19 00:25:33.600329 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-03-19 00:25:33.600340 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-03-19 00:25:33.600351 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-03-19 00:25:33.600361 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-03-19 00:25:33.600372 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-03-19 00:25:33.600383 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2026-03-19 00:25:33.600393 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.12s
2026-03-19 00:25:33.600404 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s
2026-03-19 00:25:33.776220 | orchestrator | + osism apply --environment custom facts
2026-03-19 00:25:34.999630 | orchestrator | 2026-03-19 00:25:34 | INFO  | Trying to run play facts in environment custom
2026-03-19 00:25:45.138582 | orchestrator | 2026-03-19 00:25:45 | INFO  | Prepare task for execution of facts.
2026-03-19 00:25:45.216335 | orchestrator | 2026-03-19 00:25:45 | INFO  | Task 1b38dd82-4bcc-4f74-b336-d8a283ef3d66 (facts) was prepared for execution.
2026-03-19 00:25:45.216411 | orchestrator | 2026-03-19 00:25:45 | INFO  | It takes a moment until task 1b38dd82-4bcc-4f74-b336-d8a283ef3d66 (facts) has been started and output is visible here.
2026-03-19 00:26:29.088728 | orchestrator |
2026-03-19 00:26:29.088866 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-19 00:26:29.088910 | orchestrator |
2026-03-19 00:26:29.088925 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-19 00:26:29.088958 | orchestrator | Thursday 19 March 2026 00:25:48 +0000 (0:00:00.114) 0:00:00.114 ********
2026-03-19 00:26:29.088973 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:26:29.088987 | orchestrator | ok: [testbed-manager]
2026-03-19 00:26:29.088999 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:26:29.089011 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:26:29.089023 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:26:29.089036 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:26:29.089048 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:26:29.089061 | orchestrator |
2026-03-19 00:26:29.089073 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-19 00:26:29.089085 | orchestrator | Thursday 19 March 2026 00:25:49 +0000 (0:00:01.453) 0:00:01.568 ********
2026-03-19 00:26:29.089097 | orchestrator | ok: [testbed-manager]
2026-03-19 00:26:29.089109 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:26:29.089121 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:26:29.089134 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:26:29.089146 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:26:29.089159 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:26:29.089171 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:26:29.089183 | orchestrator |
2026-03-19 00:26:29.089222 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-19 00:26:29.089235 | orchestrator |
2026-03-19 00:26:29.089248 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-19 00:26:29.089260 | orchestrator | Thursday 19 March 2026 00:25:50 +0000 (0:00:01.175) 0:00:02.743 ********
2026-03-19 00:26:29.089272 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.089285 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.089298 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.089310 | orchestrator |
2026-03-19 00:26:29.089323 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-19 00:26:29.089336 | orchestrator | Thursday 19 March 2026 00:25:50 +0000 (0:00:00.085) 0:00:02.828 ********
2026-03-19 00:26:29.089349 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.089361 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.089373 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.089385 | orchestrator |
2026-03-19 00:26:29.089398 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-19 00:26:29.089411 | orchestrator | Thursday 19 March 2026 00:25:51 +0000 (0:00:00.196) 0:00:03.024 ********
2026-03-19 00:26:29.089424 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.089436 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.089449 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.089461 | orchestrator |
2026-03-19 00:26:29.089473 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-19 00:26:29.089485 | orchestrator | Thursday 19 March 2026 00:25:51 +0000 (0:00:00.189) 0:00:03.214 ********
2026-03-19 00:26:29.089499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:26:29.089512 | orchestrator |
2026-03-19 00:26:29.089525 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-19 00:26:29.089536 | orchestrator | Thursday 19 March 2026 00:25:51 +0000 (0:00:00.137) 0:00:03.351 ********
2026-03-19 00:26:29.089547 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.089559 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.089570 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.089582 | orchestrator |
2026-03-19 00:26:29.089600 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-19 00:26:29.089612 | orchestrator | Thursday 19 March 2026 00:25:51 +0000 (0:00:00.470) 0:00:03.822 ********
2026-03-19 00:26:29.089625 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:26:29.089637 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:26:29.089650 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:26:29.089660 | orchestrator |
2026-03-19 00:26:29.089670 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-19 00:26:29.089680 | orchestrator | Thursday 19 March 2026 00:25:52 +0000 (0:00:00.134) 0:00:03.957 ********
2026-03-19 00:26:29.089690 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:26:29.089702 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:26:29.089713 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:26:29.089724 | orchestrator |
2026-03-19 00:26:29.089738 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-19 00:26:29.089749 | orchestrator | Thursday 19 March 2026 00:25:53 +0000 (0:00:01.076) 0:00:05.034 ********
2026-03-19 00:26:29.089762 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.089775 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.089789 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.089801 | orchestrator |
2026-03-19 00:26:29.089812 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-19 00:26:29.089824 | orchestrator | Thursday 19 March 2026 00:25:53 +0000 (0:00:00.440) 0:00:05.474 ********
2026-03-19 00:26:29.089836 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:26:29.089849 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:26:29.089862 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:26:29.089872 | orchestrator |
2026-03-19 00:26:29.089928 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-19 00:26:29.089941 | orchestrator | Thursday 19 March 2026 00:25:54 +0000 (0:00:01.080) 0:00:06.555 ********
2026-03-19 00:26:29.089953 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:26:29.089964 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:26:29.089977 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:26:29.089990 | orchestrator |
2026-03-19 00:26:29.090002 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-19 00:26:29.090072 | orchestrator | Thursday 19 March 2026 00:26:11 +0000 (0:00:17.086) 0:00:23.642 ********
2026-03-19 00:26:29.090091 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:26:29.090105 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:26:29.090117 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:26:29.090129 | orchestrator |
2026-03-19 00:26:29.090142 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-19 00:26:29.090177 | orchestrator | Thursday 19 March 2026 00:26:11 +0000 (0:00:00.102) 0:00:23.745 ********
2026-03-19 00:26:29.090192 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:26:29.090204 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:26:29.090216 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:26:29.090228 | orchestrator |
2026-03-19 00:26:29.090240 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-19 00:26:29.090252 | orchestrator | Thursday 19 March 2026 00:26:19 +0000 (0:00:08.126) 0:00:31.872 ********
2026-03-19 00:26:29.090263 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.090275 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.090286 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.090297 | orchestrator |
2026-03-19 00:26:29.090308 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-19 00:26:29.090319 | orchestrator | Thursday 19 March 2026 00:26:20 +0000 (0:00:00.458) 0:00:32.331 ********
2026-03-19 00:26:29.090330 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-19 00:26:29.090342 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-19 00:26:29.090353 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-19 00:26:29.090364 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-19 00:26:29.090376 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-19 00:26:29.090387 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-19 00:26:29.090398 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-19 00:26:29.090408 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-19 00:26:29.090419 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-19 00:26:29.090430 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-19 00:26:29.090441 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-19 00:26:29.090452 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-19 00:26:29.090464 | orchestrator |
2026-03-19 00:26:29.090475 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-19 00:26:29.090487 | orchestrator | Thursday 19 March 2026 00:26:24 +0000 (0:00:03.655) 0:00:35.986 ********
2026-03-19 00:26:29.090498 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.090509 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.090521 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.090532 | orchestrator |
2026-03-19 00:26:29.090542 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-19 00:26:29.090553 | orchestrator |
2026-03-19 00:26:29.090565 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-19 00:26:29.090624 | orchestrator | Thursday 19 March 2026 00:26:25 +0000 (0:00:01.263) 0:00:37.250 ********
2026-03-19 00:26:29.090639 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:26:29.090659 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:26:29.090672 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:26:29.090683 | orchestrator | ok: [testbed-manager]
2026-03-19 00:26:29.090694 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:29.090705 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:29.090717 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:29.090728 | orchestrator |
2026-03-19 00:26:29.090739 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:26:29.090751 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:26:29.090763 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:26:29.090775 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:26:29.090787 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:26:29.090798 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 00:26:29.090810 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 00:26:29.090821 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 00:26:29.090832 | orchestrator |
2026-03-19 00:26:29.090843 | orchestrator |
2026-03-19 00:26:29.090855 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:26:29.090866 | orchestrator | Thursday 19 March 2026 00:26:29 +0000 (0:00:03.711) 0:00:40.961 ********
2026-03-19 00:26:29.090893 | orchestrator | ===============================================================================
2026-03-19 00:26:29.090905 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.09s
2026-03-19 00:26:29.090916 | orchestrator | Install required packages (Debian) -------------------------------------- 8.13s
2026-03-19 00:26:29.090928 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s
2026-03-19 00:26:29.090938 | orchestrator | Copy fact files --------------------------------------------------------- 3.66s
2026-03-19 00:26:29.090949 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s
2026-03-19 00:26:29.090961 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.26s
2026-03-19 00:26:29.090981 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-03-19 00:26:29.265968 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-03-19 00:26:29.266146 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2026-03-19 00:26:29.266162 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-03-19 00:26:29.266174 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-03-19 00:26:29.266185 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-19 00:26:29.266195 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-03-19 00:26:29.266206 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2026-03-19 00:26:29.266217 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-19 00:26:29.266228 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-19 00:26:29.266239 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-19 00:26:29.266250 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-03-19 00:26:29.429104 | orchestrator | + osism apply bootstrap
2026-03-19 00:26:40.697123 | orchestrator | 2026-03-19 00:26:40 | INFO  | Prepare task for execution of bootstrap.
2026-03-19 00:26:40.774489 | orchestrator | 2026-03-19 00:26:40 | INFO  | Task 867e045e-d148-4fdb-8389-f42b172b37fc (bootstrap) was prepared for execution.
2026-03-19 00:26:40.774586 | orchestrator | 2026-03-19 00:26:40 | INFO  | It takes a moment until task 867e045e-d148-4fdb-8389-f42b172b37fc (bootstrap) has been started and output is visible here.
2026-03-19 00:26:57.266087 | orchestrator |
2026-03-19 00:26:57.266208 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-19 00:26:57.266224 | orchestrator |
2026-03-19 00:26:57.266236 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-19 00:26:57.266254 | orchestrator | Thursday 19 March 2026 00:26:43 +0000 (0:00:00.187) 0:00:00.187 ********
2026-03-19 00:26:57.266273 | orchestrator | ok: [testbed-manager]
2026-03-19 00:26:57.266304 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:26:57.266323 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:26:57.266341 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:26:57.266359 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:57.266376 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:57.266393 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:57.266411 | orchestrator |
2026-03-19 00:26:57.266429 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-19 00:26:57.266448 | orchestrator |
2026-03-19 00:26:57.266467 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-19 00:26:57.266486 | orchestrator | Thursday 19 March 2026 00:26:44 +0000 (0:00:00.305) 0:00:00.493 ********
2026-03-19 00:26:57.266505 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:26:57.266523 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:26:57.266542 | orchestrator | ok: [testbed-manager]
2026-03-19 00:26:57.266562 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:26:57.266581 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:26:57.266601 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:26:57.266620 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:26:57.266639 | orchestrator |
2026-03-19 00:26:57.266658 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-19 00:26:57.266677 | orchestrator |
2026-03-19 00:26:57.266697 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-19 00:26:57.266717 | orchestrator | Thursday 19 March 2026 00:26:49 +0000 (0:00:05.924) 0:00:06.417 ********
2026-03-19 00:26:57.266737 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-19 00:26:57.266759 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-19 00:26:57.266778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-19 00:26:57.266797 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-19 00:26:57.266816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 00:26:57.266836 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-19 00:26:57.266855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 00:26:57.266902 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-19 00:26:57.266924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 00:26:57.266943 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-19 00:26:57.266962 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-19 00:26:57.266981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 00:26:57.266999 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-19 00:26:57.267018 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 00:26:57.267037 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-19 00:26:57.267055 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-19 00:26:57.267113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-19 00:26:57.267134 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-19 00:26:57.267152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 00:26:57.267167 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:26:57.267178 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-19 00:26:57.267189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-19 00:26:57.267200 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:26:57.267210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-19 00:26:57.267221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-19 00:26:57.267232 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-19 00:26:57.267242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-19 00:26:57.267253 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-19 00:26:57.267264 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-19 00:26:57.267275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-19 00:26:57.267285 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-19 00:26:57.267296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-19 00:26:57.267306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-19 00:26:57.267317 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-19 00:26:57.267328 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-2)  2026-03-19 00:26:57.267338 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:26:57.267349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 00:26:57.267359 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:26:57.267370 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 00:26:57.267380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:26:57.267391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-19 00:26:57.267402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 00:26:57.267412 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 00:26:57.267423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:26:57.267433 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 00:26:57.267444 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 00:26:57.267478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:26:57.267489 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:26:57.267500 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 00:26:57.267510 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 00:26:57.267521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 00:26:57.267532 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 00:26:57.267542 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:26:57.267553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 00:26:57.267563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 00:26:57.267574 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:26:57.267584 | orchestrator | 2026-03-19 
00:26:57.267595 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-19 00:26:57.267606 | orchestrator | 2026-03-19 00:26:57.267616 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-19 00:26:57.267627 | orchestrator | Thursday 19 March 2026 00:26:50 +0000 (0:00:00.470) 0:00:06.888 ******** 2026-03-19 00:26:57.267638 | orchestrator | ok: [testbed-manager] 2026-03-19 00:26:57.267648 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:26:57.267668 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:26:57.267678 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:26:57.267689 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:26:57.267699 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:26:57.267710 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:26:57.267720 | orchestrator | 2026-03-19 00:26:57.267731 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-19 00:26:57.267742 | orchestrator | Thursday 19 March 2026 00:26:51 +0000 (0:00:01.217) 0:00:08.105 ******** 2026-03-19 00:26:57.267753 | orchestrator | ok: [testbed-manager] 2026-03-19 00:26:57.267763 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:26:57.267774 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:26:57.267785 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:26:57.267795 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:26:57.267806 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:26:57.267816 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:26:57.267827 | orchestrator | 2026-03-19 00:26:57.267838 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-19 00:26:57.267848 | orchestrator | Thursday 19 March 2026 00:26:52 +0000 (0:00:01.317) 0:00:09.423 ******** 2026-03-19 00:26:57.267860 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:26:57.267892 | orchestrator | 2026-03-19 00:26:57.267904 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-19 00:26:57.267915 | orchestrator | Thursday 19 March 2026 00:26:53 +0000 (0:00:00.283) 0:00:09.707 ******** 2026-03-19 00:26:57.267925 | orchestrator | changed: [testbed-manager] 2026-03-19 00:26:57.267936 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:26:57.267947 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:26:57.267958 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:26:57.267968 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:26:57.267979 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:26:57.267989 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:26:57.268000 | orchestrator | 2026-03-19 00:26:57.268011 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-19 00:26:57.268021 | orchestrator | Thursday 19 March 2026 00:26:54 +0000 (0:00:01.550) 0:00:11.257 ******** 2026-03-19 00:26:57.268032 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:26:57.268044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:26:57.268057 | orchestrator | 2026-03-19 00:26:57.268067 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-19 00:26:57.268098 | orchestrator | Thursday 19 March 2026 00:26:55 +0000 (0:00:00.250) 0:00:11.508 ******** 2026-03-19 00:26:57.268110 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:26:57.268120 | 
orchestrator | changed: [testbed-node-3] 2026-03-19 00:26:57.268131 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:26:57.268146 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:26:57.268157 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:26:57.268167 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:26:57.268178 | orchestrator | 2026-03-19 00:26:57.268189 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-19 00:26:57.268200 | orchestrator | Thursday 19 March 2026 00:26:56 +0000 (0:00:01.033) 0:00:12.542 ******** 2026-03-19 00:26:57.268211 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:26:57.268221 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:26:57.268232 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:26:57.268242 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:26:57.268253 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:26:57.268264 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:26:57.268281 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:26:57.268291 | orchestrator | 2026-03-19 00:26:57.268302 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-19 00:26:57.268313 | orchestrator | Thursday 19 March 2026 00:26:56 +0000 (0:00:00.598) 0:00:13.140 ******** 2026-03-19 00:26:57.268324 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:26:57.268334 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:26:57.268345 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:26:57.268355 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:26:57.268366 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:26:57.268377 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:26:57.268387 | orchestrator | ok: [testbed-manager] 2026-03-19 00:26:57.268398 | orchestrator | 2026-03-19 00:26:57.268409 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-19 00:26:57.268421 | orchestrator | Thursday 19 March 2026 00:26:57 +0000 (0:00:00.436) 0:00:13.577 ******** 2026-03-19 00:26:57.268431 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:26:57.268442 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:26:57.268460 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:27:09.035145 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:27:09.035263 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:27:09.035280 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:27:09.035293 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:27:09.035304 | orchestrator | 2026-03-19 00:27:09.035317 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-19 00:27:09.035331 | orchestrator | Thursday 19 March 2026 00:26:57 +0000 (0:00:00.219) 0:00:13.796 ******** 2026-03-19 00:27:09.035344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:09.035377 | orchestrator | 2026-03-19 00:27:09.035389 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-19 00:27:09.035402 | orchestrator | Thursday 19 March 2026 00:26:57 +0000 (0:00:00.308) 0:00:14.105 ******** 2026-03-19 00:27:09.035415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:09.035427 | orchestrator | 2026-03-19 00:27:09.035439 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-19 
00:27:09.035450 | orchestrator | Thursday 19 March 2026 00:26:57 +0000 (0:00:00.282) 0:00:14.387 ******** 2026-03-19 00:27:09.035461 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.035472 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.035484 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.035494 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.035505 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.035516 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.035528 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.035540 | orchestrator | 2026-03-19 00:27:09.035553 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-19 00:27:09.035565 | orchestrator | Thursday 19 March 2026 00:26:59 +0000 (0:00:01.496) 0:00:15.884 ******** 2026-03-19 00:27:09.035578 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:27:09.035589 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:27:09.035601 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:27:09.035613 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:27:09.035625 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:27:09.035638 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:27:09.035651 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:27:09.035665 | orchestrator | 2026-03-19 00:27:09.035679 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-19 00:27:09.035776 | orchestrator | Thursday 19 March 2026 00:26:59 +0000 (0:00:00.206) 0:00:16.091 ******** 2026-03-19 00:27:09.035798 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.035819 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.035840 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.035861 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.035906 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.035927 | orchestrator 
| ok: [testbed-node-5] 2026-03-19 00:27:09.035949 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.035970 | orchestrator | 2026-03-19 00:27:09.035992 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-19 00:27:09.036014 | orchestrator | Thursday 19 March 2026 00:27:00 +0000 (0:00:00.593) 0:00:16.684 ******** 2026-03-19 00:27:09.036026 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:27:09.036038 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:27:09.036049 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:27:09.036061 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:27:09.036073 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:27:09.036084 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:27:09.036096 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:27:09.036108 | orchestrator | 2026-03-19 00:27:09.036120 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-19 00:27:09.036133 | orchestrator | Thursday 19 March 2026 00:27:00 +0000 (0:00:00.224) 0:00:16.909 ******** 2026-03-19 00:27:09.036145 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.036167 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:09.036179 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:09.036191 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:09.036202 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:09.036214 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:09.036226 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:09.036238 | orchestrator | 2026-03-19 00:27:09.036249 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-19 00:27:09.036261 | orchestrator | Thursday 19 March 2026 00:27:01 +0000 (0:00:00.597) 0:00:17.506 ******** 2026-03-19 00:27:09.036273 | orchestrator | ok: 
[testbed-manager] 2026-03-19 00:27:09.036285 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:09.036297 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:09.036309 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:09.036320 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:09.036332 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:09.036344 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:09.036356 | orchestrator | 2026-03-19 00:27:09.036368 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-19 00:27:09.036380 | orchestrator | Thursday 19 March 2026 00:27:02 +0000 (0:00:01.142) 0:00:18.648 ******** 2026-03-19 00:27:09.036392 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.036404 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.036416 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.036428 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.036440 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.036452 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.036462 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.036472 | orchestrator | 2026-03-19 00:27:09.036483 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-19 00:27:09.036495 | orchestrator | Thursday 19 March 2026 00:27:03 +0000 (0:00:01.001) 0:00:19.649 ******** 2026-03-19 00:27:09.036543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:09.036557 | orchestrator | 2026-03-19 00:27:09.036569 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-19 00:27:09.036581 | orchestrator | Thursday 19 March 2026 
00:27:03 +0000 (0:00:00.297) 0:00:19.947 ******** 2026-03-19 00:27:09.036601 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:27:09.036613 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:09.036625 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:09.036638 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:09.036649 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:09.036661 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:09.036672 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:09.036684 | orchestrator | 2026-03-19 00:27:09.036696 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-19 00:27:09.036707 | orchestrator | Thursday 19 March 2026 00:27:04 +0000 (0:00:01.313) 0:00:21.260 ******** 2026-03-19 00:27:09.036719 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.036730 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.036742 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.036753 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.036764 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.036775 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.036787 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.036798 | orchestrator | 2026-03-19 00:27:09.036810 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-19 00:27:09.036821 | orchestrator | Thursday 19 March 2026 00:27:05 +0000 (0:00:00.226) 0:00:21.487 ******** 2026-03-19 00:27:09.036833 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.036845 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.036856 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.036868 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.036900 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.036911 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.036922 | 
orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.036934 | orchestrator | 2026-03-19 00:27:09.036946 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-19 00:27:09.036958 | orchestrator | Thursday 19 March 2026 00:27:05 +0000 (0:00:00.181) 0:00:21.669 ******** 2026-03-19 00:27:09.036970 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.036982 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.036993 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.037005 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.037016 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.037028 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.037039 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.037051 | orchestrator | 2026-03-19 00:27:09.037063 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-19 00:27:09.037075 | orchestrator | Thursday 19 March 2026 00:27:05 +0000 (0:00:00.171) 0:00:21.840 ******** 2026-03-19 00:27:09.037087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:09.037100 | orchestrator | 2026-03-19 00:27:09.037110 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-19 00:27:09.037122 | orchestrator | Thursday 19 March 2026 00:27:05 +0000 (0:00:00.235) 0:00:22.075 ******** 2026-03-19 00:27:09.037135 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.037147 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.037158 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.037170 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.037180 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.037191 | orchestrator | ok: 
[testbed-node-5] 2026-03-19 00:27:09.037202 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.037214 | orchestrator | 2026-03-19 00:27:09.037225 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-19 00:27:09.037237 | orchestrator | Thursday 19 March 2026 00:27:06 +0000 (0:00:00.559) 0:00:22.635 ******** 2026-03-19 00:27:09.037248 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:27:09.037269 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:27:09.037281 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:27:09.037293 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:27:09.037305 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:27:09.037317 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:27:09.037329 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:27:09.037340 | orchestrator | 2026-03-19 00:27:09.037352 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-19 00:27:09.037363 | orchestrator | Thursday 19 March 2026 00:27:06 +0000 (0:00:00.170) 0:00:22.806 ******** 2026-03-19 00:27:09.037375 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.037387 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.037399 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.037410 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.037422 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:09.037434 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:09.037446 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:09.037457 | orchestrator | 2026-03-19 00:27:09.037467 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-19 00:27:09.037478 | orchestrator | Thursday 19 March 2026 00:27:07 +0000 (0:00:01.118) 0:00:23.925 ******** 2026-03-19 00:27:09.037491 | orchestrator | ok: [testbed-manager] 2026-03-19 
00:27:09.037503 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:09.037515 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:09.037526 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:09.037537 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.037548 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.037560 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:09.037571 | orchestrator | 2026-03-19 00:27:09.037581 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-19 00:27:09.037592 | orchestrator | Thursday 19 March 2026 00:27:08 +0000 (0:00:00.533) 0:00:24.459 ******** 2026-03-19 00:27:09.037603 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:09.037613 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:09.037623 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:09.037635 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:09.037659 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:52.324863 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:52.326386 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.326509 | orchestrator | 2026-03-19 00:27:52.326538 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-19 00:27:52.326559 | orchestrator | Thursday 19 March 2026 00:27:09 +0000 (0:00:01.079) 0:00:25.538 ******** 2026-03-19 00:27:52.326577 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.326595 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.326628 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.326646 | orchestrator | changed: [testbed-manager] 2026-03-19 00:27:52.326664 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:52.326681 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:52.326698 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:52.326715 | orchestrator | 2026-03-19 00:27:52.326733 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-19 00:27:52.326750 | orchestrator | Thursday 19 March 2026 00:27:27 +0000 (0:00:18.023) 0:00:43.562 ******** 2026-03-19 00:27:52.326768 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.326786 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.326804 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.326822 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.326839 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.326856 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.326935 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.326954 | orchestrator | 2026-03-19 00:27:52.326972 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-19 00:27:52.326991 | orchestrator | Thursday 19 March 2026 00:27:27 +0000 (0:00:00.219) 0:00:43.781 ******** 2026-03-19 00:27:52.327009 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.327081 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.327101 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.327120 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.327136 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.327154 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.327172 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.327190 | orchestrator | 2026-03-19 00:27:52.327208 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-19 00:27:52.327225 | orchestrator | Thursday 19 March 2026 00:27:27 +0000 (0:00:00.204) 0:00:43.986 ******** 2026-03-19 00:27:52.327243 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.327260 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.327277 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.327294 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.327313 | orchestrator | ok: 
[testbed-node-3] 2026-03-19 00:27:52.327330 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.327348 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.327366 | orchestrator | 2026-03-19 00:27:52.327384 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-19 00:27:52.327403 | orchestrator | Thursday 19 March 2026 00:27:27 +0000 (0:00:00.251) 0:00:44.238 ******** 2026-03-19 00:27:52.327425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:52.327446 | orchestrator | 2026-03-19 00:27:52.327494 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-19 00:27:52.327514 | orchestrator | Thursday 19 March 2026 00:27:28 +0000 (0:00:00.330) 0:00:44.568 ******** 2026-03-19 00:27:52.327532 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.327549 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.327567 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.327585 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.327621 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.327640 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.327660 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.327678 | orchestrator | 2026-03-19 00:27:52.327696 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-19 00:27:52.327715 | orchestrator | Thursday 19 March 2026 00:27:30 +0000 (0:00:02.069) 0:00:46.638 ******** 2026-03-19 00:27:52.327734 | orchestrator | changed: [testbed-manager] 2026-03-19 00:27:52.327753 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:52.327773 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:52.327794 | orchestrator | 
changed: [testbed-node-0] 2026-03-19 00:27:52.327813 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:52.327832 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:52.327859 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:52.327915 | orchestrator | 2026-03-19 00:27:52.327934 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-19 00:27:52.327954 | orchestrator | Thursday 19 March 2026 00:27:31 +0000 (0:00:01.175) 0:00:47.813 ******** 2026-03-19 00:27:52.327972 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.327990 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.328007 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.328026 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.328045 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.328065 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.328083 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.328101 | orchestrator | 2026-03-19 00:27:52.328119 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-19 00:27:52.328136 | orchestrator | Thursday 19 March 2026 00:27:32 +0000 (0:00:00.889) 0:00:48.703 ******** 2026-03-19 00:27:52.328157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:52.328200 | orchestrator | 2026-03-19 00:27:52.328219 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-19 00:27:52.328240 | orchestrator | Thursday 19 March 2026 00:27:32 +0000 (0:00:00.342) 0:00:49.045 ******** 2026-03-19 00:27:52.328260 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:52.328278 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:52.328295 | 
orchestrator | changed: [testbed-manager] 2026-03-19 00:27:52.328315 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:52.328334 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:52.328352 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:52.328370 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:52.328387 | orchestrator | 2026-03-19 00:27:52.328447 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-19 00:27:52.328467 | orchestrator | Thursday 19 March 2026 00:27:33 +0000 (0:00:01.158) 0:00:50.204 ******** 2026-03-19 00:27:52.328486 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:27:52.328505 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:27:52.328524 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:27:52.328541 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:27:52.328560 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:27:52.328578 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:27:52.328599 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:27:52.328618 | orchestrator | 2026-03-19 00:27:52.328636 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-19 00:27:52.328656 | orchestrator | Thursday 19 March 2026 00:27:34 +0000 (0:00:00.228) 0:00:50.432 ******** 2026-03-19 00:27:52.328675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:52.328695 | orchestrator | 2026-03-19 00:27:52.328714 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-19 00:27:52.328734 | orchestrator | Thursday 19 March 2026 00:27:34 +0000 (0:00:00.314) 0:00:50.747 ******** 2026-03-19 00:27:52.328753 | orchestrator | ok: 
[testbed-manager] 2026-03-19 00:27:52.328771 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.328783 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.328794 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.328805 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.328817 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.328828 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.328838 | orchestrator | 2026-03-19 00:27:52.328849 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-19 00:27:52.328860 | orchestrator | Thursday 19 March 2026 00:27:36 +0000 (0:00:01.950) 0:00:52.697 ******** 2026-03-19 00:27:52.328901 | orchestrator | changed: [testbed-manager] 2026-03-19 00:27:52.328920 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:52.328934 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:52.328943 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:52.328953 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:52.328963 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:52.328972 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:52.328982 | orchestrator | 2026-03-19 00:27:52.328991 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-19 00:27:52.329001 | orchestrator | Thursday 19 March 2026 00:27:37 +0000 (0:00:01.196) 0:00:53.894 ******** 2026-03-19 00:27:52.329011 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:27:52.329021 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:27:52.329030 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:27:52.329040 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:27:52.329049 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:27:52.329059 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:27:52.329081 | orchestrator | changed: [testbed-manager] 2026-03-19 00:27:52.329091 | 
orchestrator | 2026-03-19 00:27:52.329100 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-19 00:27:52.329110 | orchestrator | Thursday 19 March 2026 00:27:49 +0000 (0:00:11.866) 0:01:05.760 ******** 2026-03-19 00:27:52.329120 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.329129 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.329139 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.329148 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.329157 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.329167 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.329177 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.329186 | orchestrator | 2026-03-19 00:27:52.329196 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-19 00:27:52.329205 | orchestrator | Thursday 19 March 2026 00:27:50 +0000 (0:00:01.347) 0:01:07.107 ******** 2026-03-19 00:27:52.329215 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.329224 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.329234 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.329243 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.329253 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.329263 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.329272 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.329282 | orchestrator | 2026-03-19 00:27:52.329299 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-19 00:27:52.329309 | orchestrator | Thursday 19 March 2026 00:27:51 +0000 (0:00:00.930) 0:01:08.038 ******** 2026-03-19 00:27:52.329319 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.329328 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.329338 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.329347 | orchestrator | ok: 
[testbed-node-2] 2026-03-19 00:27:52.329357 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.329366 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.329376 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.329385 | orchestrator | 2026-03-19 00:27:52.329395 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-19 00:27:52.329404 | orchestrator | Thursday 19 March 2026 00:27:51 +0000 (0:00:00.212) 0:01:08.251 ******** 2026-03-19 00:27:52.329414 | orchestrator | ok: [testbed-manager] 2026-03-19 00:27:52.329423 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:27:52.329432 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:27:52.329442 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:27:52.329452 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:27:52.329461 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:27:52.329471 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:27:52.329480 | orchestrator | 2026-03-19 00:27:52.329490 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-19 00:27:52.329500 | orchestrator | Thursday 19 March 2026 00:27:52 +0000 (0:00:00.223) 0:01:08.475 ******** 2026-03-19 00:27:52.329510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:27:52.329521 | orchestrator | 2026-03-19 00:27:52.329541 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-19 00:30:01.432056 | orchestrator | Thursday 19 March 2026 00:27:52 +0000 (0:00:00.269) 0:01:08.744 ******** 2026-03-19 00:30:01.432153 | orchestrator | ok: [testbed-manager] 2026-03-19 00:30:01.432160 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432165 | orchestrator | 
ok: [testbed-node-2] 2026-03-19 00:30:01.432169 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432173 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:30:01.432177 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432181 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432185 | orchestrator | 2026-03-19 00:30:01.432190 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-19 00:30:01.432214 | orchestrator | Thursday 19 March 2026 00:27:54 +0000 (0:00:01.941) 0:01:10.687 ******** 2026-03-19 00:30:01.432218 | orchestrator | changed: [testbed-manager] 2026-03-19 00:30:01.432223 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:30:01.432226 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:30:01.432230 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:30:01.432234 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:30:01.432238 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:30:01.432242 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:30:01.432245 | orchestrator | 2026-03-19 00:30:01.432249 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-19 00:30:01.432254 | orchestrator | Thursday 19 March 2026 00:27:55 +0000 (0:00:00.820) 0:01:11.507 ******** 2026-03-19 00:30:01.432258 | orchestrator | ok: [testbed-manager] 2026-03-19 00:30:01.432262 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432266 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432269 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:30:01.432273 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432276 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:30:01.432280 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432284 | orchestrator | 2026-03-19 00:30:01.432288 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-19 
00:30:01.432291 | orchestrator | Thursday 19 March 2026 00:27:55 +0000 (0:00:00.254) 0:01:11.762 ******** 2026-03-19 00:30:01.432295 | orchestrator | ok: [testbed-manager] 2026-03-19 00:30:01.432299 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432302 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:30:01.432306 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:30:01.432309 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432313 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432317 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432320 | orchestrator | 2026-03-19 00:30:01.432324 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-19 00:30:01.432328 | orchestrator | Thursday 19 March 2026 00:27:56 +0000 (0:00:01.323) 0:01:13.086 ******** 2026-03-19 00:30:01.432331 | orchestrator | changed: [testbed-manager] 2026-03-19 00:30:01.432335 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:30:01.432339 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:30:01.432342 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:30:01.432346 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:30:01.432350 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:30:01.432353 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:30:01.432357 | orchestrator | 2026-03-19 00:30:01.432361 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-19 00:30:01.432364 | orchestrator | Thursday 19 March 2026 00:27:58 +0000 (0:00:02.165) 0:01:15.251 ******** 2026-03-19 00:30:01.432368 | orchestrator | ok: [testbed-manager] 2026-03-19 00:30:01.432372 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432375 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:30:01.432379 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432383 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432387 | orchestrator | ok: 
[testbed-node-4] 2026-03-19 00:30:01.432390 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432394 | orchestrator | 2026-03-19 00:30:01.432398 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-19 00:30:01.432401 | orchestrator | Thursday 19 March 2026 00:28:02 +0000 (0:00:03.423) 0:01:18.675 ******** 2026-03-19 00:30:01.432405 | orchestrator | ok: [testbed-manager] 2026-03-19 00:30:01.432409 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432412 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432416 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:30:01.432419 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432423 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:30:01.432427 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432430 | orchestrator | 2026-03-19 00:30:01.432434 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-19 00:30:01.432456 | orchestrator | Thursday 19 March 2026 00:28:37 +0000 (0:00:35.336) 0:01:54.011 ******** 2026-03-19 00:30:01.432463 | orchestrator | changed: [testbed-manager] 2026-03-19 00:30:01.432469 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:30:01.432527 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:30:01.432534 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:30:01.432541 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:30:01.432548 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:30:01.432555 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:30:01.432561 | orchestrator | 2026-03-19 00:30:01.432567 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-19 00:30:01.432574 | orchestrator | Thursday 19 March 2026 00:29:47 +0000 (0:01:09.877) 0:03:03.889 ******** 2026-03-19 00:30:01.432580 | orchestrator | ok: [testbed-manager] 2026-03-19 00:30:01.432587 | orchestrator | 
ok: [testbed-node-2] 2026-03-19 00:30:01.432593 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432600 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432607 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432613 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:30:01.432620 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432627 | orchestrator | 2026-03-19 00:30:01.432635 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-19 00:30:01.432639 | orchestrator | Thursday 19 March 2026 00:29:49 +0000 (0:00:01.937) 0:03:05.827 ******** 2026-03-19 00:30:01.432644 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:30:01.432648 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:30:01.432652 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:30:01.432657 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:30:01.432661 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:30:01.432665 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:30:01.432670 | orchestrator | changed: [testbed-manager] 2026-03-19 00:30:01.432675 | orchestrator | 2026-03-19 00:30:01.432679 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-19 00:30:01.432683 | orchestrator | Thursday 19 March 2026 00:30:00 +0000 (0:00:11.043) 0:03:16.870 ******** 2026-03-19 00:30:01.432708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-19 00:30:01.432720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-19 00:30:01.432726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-19 00:30:01.432732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-19 00:30:01.432742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-19 00:30:01.432746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-19 00:30:01.432754 | orchestrator | 2026-03-19 00:30:01.432758 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-19 00:30:01.432763 | orchestrator | Thursday 19 March 2026 00:30:00 +0000 (0:00:00.316) 0:03:17.186 ******** 2026-03-19 00:30:01.432767 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-19 00:30:01.432772 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:30:01.432777 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-19 00:30:01.432781 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:30:01.432786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-19 00:30:01.432790 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:30:01.432799 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-19 00:30:01.432804 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:30:01.432808 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-19 00:30:01.432813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-19 00:30:01.432817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-19 00:30:01.432836 | orchestrator | 2026-03-19 00:30:01.432840 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-19 00:30:01.432844 | orchestrator | Thursday 19 March 2026 00:30:01 +0000 (0:00:00.607) 0:03:17.794 ******** 2026-03-19 00:30:01.432849 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-19 00:30:01.432854 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-19 00:30:01.432859 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-19 00:30:01.432863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-19 00:30:01.432868 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-19 00:30:01.432875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-19 00:30:07.632425 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-19 00:30:07.632527 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-19 00:30:07.632543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-19 00:30:07.632554 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-19 00:30:07.632567 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:30:07.632580 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-19 00:30:07.632591 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-19 00:30:07.632602 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-19 00:30:07.632634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-19 00:30:07.632647 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-19 00:30:07.632657 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-19 
00:30:07.632668 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-19 00:30:07.632679 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-19 00:30:07.632690 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-19 00:30:07.632701 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-19 00:30:07.632711 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-19 00:30:07.632722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-19 00:30:07.632733 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-19 00:30:07.632744 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-19 00:30:07.632754 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-19 00:30:07.632765 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-19 00:30:07.632776 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:30:07.632790 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-19 00:30:07.632809 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-19 00:30:07.632859 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-19 00:30:07.632877 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-19 00:30:07.632896 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
00:30:07.632915 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-19 00:30:07.632933 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-19 00:30:07.632970 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-19 00:30:07.632991 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-19 00:30:07.633010 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-19 00:30:07.633030 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-19 00:30:07.633050 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-19 00:30:07.633069 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-19 00:30:07.633088 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-19 00:30:07.633108 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-19 00:30:07.633128 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:30:07.633148 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-19 00:30:07.633169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-19 00:30:07.633190 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-19 00:30:07.633225 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-19 00:30:07.633240 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-19 00:30:07.633271 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-19 00:30:07.633286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-19 00:30:07.633299 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-19 00:30:07.633312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-19 00:30:07.633323 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-19 00:30:07.633334 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-19 00:30:07.633345 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-19 00:30:07.633356 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-19 00:30:07.633367 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-19 00:30:07.633378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-19 00:30:07.633389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-19 00:30:07.633399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-19 00:30:07.633410 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-19 00:30:07.633421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-19 00:30:07.633432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2026-03-19 00:30:07.633443 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-19 00:30:07.633454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-19 00:30:07.633465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-19 00:30:07.633476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-19 00:30:07.633487 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-19 00:30:07.633498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-19 00:30:07.633509 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-19 00:30:07.633520 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-19 00:30:07.633530 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-19 00:30:07.633541 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-19 00:30:07.633553 | orchestrator | 2026-03-19 00:30:07.633564 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-19 00:30:07.633575 | orchestrator | Thursday 19 March 2026 00:30:06 +0000 (0:00:05.228) 0:03:23.023 ******** 2026-03-19 00:30:07.633586 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-19 00:30:07.633597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-19 00:30:07.633608 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-19 00:30:07.633626 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 00:30:07.633643 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 00:30:07.633654 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 00:30:07.633665 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 00:30:07.633676 | orchestrator |
2026-03-19 00:30:07.633687 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-19 00:30:07.633698 | orchestrator | Thursday 19 March 2026 00:30:07 +0000 (0:00:00.548) 0:03:23.571 ********
2026-03-19 00:30:07.633709 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:07.633720 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:30:07.633731 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:07.633742 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:07.633753 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:30:07.633764 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:07.633775 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:30:07.633786 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:30:07.633797 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:07.633808 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:07.633856 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314454 | orchestrator |
2026-03-19 00:30:19.314517 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-19 00:30:19.314523 | orchestrator | Thursday 19 March 2026 00:30:07 +0000 (0:00:00.514) 0:03:24.086 ********
2026-03-19 00:30:19.314528 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314533 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:30:19.314537 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314541 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314545 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:30:19.314549 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:30:19.314553 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314557 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:30:19.314560 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314564 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 00:30:19.314572 | orchestrator |
2026-03-19 00:30:19.314576 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-19 00:30:19.314579 | orchestrator | Thursday 19 March 2026 00:30:08 +0000 (0:00:00.446) 0:03:24.532 ********
2026-03-19 00:30:19.314583 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314587 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:30:19.314591 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314595 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314598 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:30:19.314613 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314617 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:30:19.314621 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:30:19.314625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314632 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-19 00:30:19.314636 | orchestrator |
2026-03-19 00:30:19.314640 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-19 00:30:19.314644 | orchestrator | Thursday 19 March 2026 00:30:08 +0000 (0:00:00.556) 0:03:25.089 ********
2026-03-19 00:30:19.314648 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:30:19.314652 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:30:19.314656 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:30:19.314659 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:30:19.314663 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:30:19.314667 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:30:19.314671 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:30:19.314674 | orchestrator |
2026-03-19 00:30:19.314678 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-19 00:30:19.314682 | orchestrator | Thursday 19 March 2026 00:30:08 +0000 (0:00:00.242) 0:03:25.331 ********
2026-03-19 00:30:19.314686 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:30:19.314690 | orchestrator | ok: [testbed-manager]
2026-03-19 00:30:19.314694 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:30:19.314698 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:30:19.314701 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:30:19.314705 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:30:19.314709 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:30:19.314712 | orchestrator |
2026-03-19 00:30:19.314716 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-19 00:30:19.314720 | orchestrator | Thursday 19 March 2026 00:30:14 +0000 (0:00:05.148) 0:03:30.480 ********
2026-03-19 00:30:19.314724 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-19 00:30:19.314728 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:30:19.314732 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-19 00:30:19.314735 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-19 00:30:19.314739 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:30:19.314743 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-19 00:30:19.314746 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:30:19.314750 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-19 00:30:19.314754 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:30:19.314758 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-19 00:30:19.314761 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:30:19.314765 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:30:19.314769 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-19 00:30:19.314772 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:30:19.314776 | orchestrator |
2026-03-19 00:30:19.314780 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-19 00:30:19.314784 | orchestrator | Thursday 19 March 2026 00:30:14 +0000 (0:00:00.256) 0:03:30.736 ********
2026-03-19 00:30:19.314787 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-19 00:30:19.314791 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-19 00:30:19.314795 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-19 00:30:19.314858 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-19 00:30:19.314865 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-19 00:30:19.314869 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-19 00:30:19.314876 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-19 00:30:19.314880 | orchestrator |
2026-03-19 00:30:19.314884 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-19 00:30:19.314887 | orchestrator | Thursday 19 March 2026 00:30:15 +0000 (0:00:01.132) 0:03:31.869 ********
2026-03-19 00:30:19.314892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:30:19.314897 | orchestrator |
2026-03-19 00:30:19.314901 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-19 00:30:19.314904 | orchestrator | Thursday 19 March 2026 00:30:15 +0000 (0:00:00.330) 0:03:32.200 ********
2026-03-19 00:30:19.314908 | orchestrator | ok: [testbed-manager]
2026-03-19 00:30:19.314912 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:30:19.314915 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:30:19.314919 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:30:19.314923 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:30:19.314926 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:30:19.314930 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:30:19.314934 | orchestrator |
2026-03-19 00:30:19.314938 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-19 00:30:19.314941 | orchestrator | Thursday 19 March 2026 00:30:16 +0000 (0:00:01.213) 0:03:33.413 ********
2026-03-19 00:30:19.314945 | orchestrator | ok: [testbed-manager]
2026-03-19 00:30:19.314948 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:30:19.314952 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:30:19.314956 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:30:19.314959 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:30:19.314963 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:30:19.314972 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:30:19.314976 | orchestrator |
2026-03-19 00:30:19.314980 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-19 00:30:19.314983 | orchestrator | Thursday 19 March 2026 00:30:17 +0000 (0:00:00.599) 0:03:34.013 ********
2026-03-19 00:30:19.314987 | orchestrator | changed: [testbed-manager]
2026-03-19 00:30:19.314991 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:30:19.314995 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:30:19.314998 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:30:19.315002 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:30:19.315006 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:30:19.315009 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:30:19.315013 | orchestrator |
2026-03-19 00:30:19.315017 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-19 00:30:19.315021 | orchestrator | Thursday 19 March 2026 00:30:18 +0000 (0:00:00.652) 0:03:34.665 ********
2026-03-19 00:30:19.315024 | orchestrator | ok: [testbed-manager]
2026-03-19 00:30:19.315028 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:30:19.315032 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:30:19.315035 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:30:19.315039 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:30:19.315043 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:30:19.315046 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:30:19.315050 | orchestrator |
2026-03-19 00:30:19.315054 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-19 00:30:19.315057 | orchestrator | Thursday 19 March 2026 00:30:18 +0000 (0:00:00.579) 0:03:35.245 ********
2026-03-19 00:30:19.315064 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878741.1798694, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:19.315072 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878733.612885, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:19.315076 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878745.874518, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:19.315089 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878744.962962, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645149 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878760.0034962, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645300 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878774.9156508, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645325 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773878758.5976324, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645372 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645431 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645454 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645475 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645556 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645633 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645682 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 00:30:24.645696 | orchestrator |
2026-03-19 00:30:24.645760 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-19 00:30:24.645899 | orchestrator | Thursday 19 March 2026 00:30:19 +0000 (0:00:01.015) 0:03:36.260 ********
2026-03-19 00:30:24.645940 | orchestrator | changed: [testbed-manager]
2026-03-19 00:30:24.645990 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:30:24.646001 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:30:24.646114 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:30:24.646133 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:30:24.646144 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:30:24.646154 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:30:24.646169 | orchestrator |
2026-03-19 00:30:24.646187 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-19 00:30:24.646206 | orchestrator | Thursday 19 March 2026 00:30:20 +0000 (0:00:01.090) 0:03:37.350 ********
2026-03-19 00:30:24.646264 | orchestrator | changed: [testbed-manager]
2026-03-19 00:30:24.646277 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:30:24.646295 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:30:24.646321 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:30:24.646339 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:30:24.646357 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:30:24.646374 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:30:24.646391 | orchestrator |
2026-03-19 00:30:24.646408 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-19 00:30:24.646424 | orchestrator | Thursday 19 March 2026 00:30:22 +0000 (0:00:01.085) 0:03:38.436 ********
2026-03-19 00:30:24.646440 | orchestrator | changed: [testbed-manager]
2026-03-19 00:30:24.646457 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:30:24.646477 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:30:24.646496 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:30:24.646513 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:30:24.646531 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:30:24.646549 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:30:24.646693 | orchestrator |
2026-03-19 00:30:24.646714 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-19 00:30:24.646726 | orchestrator | Thursday 19 March 2026 00:30:23 +0000 (0:00:01.134) 0:03:39.570 ********
2026-03-19 00:30:24.646737 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:30:24.646748 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:30:24.646759 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:30:24.646769 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:30:24.646780 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:30:24.646790 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:30:24.646801 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:30:24.646839 | orchestrator |
2026-03-19 00:30:24.646850 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-19 00:30:24.646861 | orchestrator | Thursday 19 March 2026 00:30:23 +0000 (0:00:00.323) 0:03:39.894 ********
2026-03-19 00:30:24.646888 | orchestrator | ok: [testbed-manager]
2026-03-19 00:30:24.646901 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:30:24.646911 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:30:24.646922 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:30:24.646932 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:30:24.646943 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:30:24.646953 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:30:24.646964 | orchestrator |
2026-03-19 00:30:24.646975 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-19 00:30:24.646985 | orchestrator | Thursday 19 March 2026 00:30:24 +0000 (0:00:00.792) 0:03:40.686 ********
2026-03-19 00:30:24.646998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:30:24.647011 | orchestrator |
2026-03-19 00:30:24.647022 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-19 00:30:24.647076 | orchestrator | Thursday 19 March 2026 00:30:24 +0000 (0:00:00.379) 0:03:41.066 ********
2026-03-19 00:31:43.172765 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.172826 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:31:43.172832 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:31:43.172836 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:31:43.172851 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:31:43.172854 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:31:43.172858 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:31:43.172862 | orchestrator |
2026-03-19 00:31:43.172867 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-19 00:31:43.172872 | orchestrator | Thursday 19 March 2026 00:30:34 +0000 (0:00:09.656) 0:03:50.723 ********
2026-03-19 00:31:43.172875 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.172879 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.172883 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.172887 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.172890 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.172894 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.172898 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.172901 | orchestrator |
2026-03-19 00:31:43.172905 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-19 00:31:43.172909 | orchestrator | Thursday 19 March 2026 00:30:35 +0000 (0:00:01.395) 0:03:52.118 ********
2026-03-19 00:31:43.172913 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.172917 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.172921 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.172924 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.172928 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.172932 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.172935 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.172939 | orchestrator |
2026-03-19 00:31:43.172943 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-19 00:31:43.172947 | orchestrator | Thursday 19 March 2026 00:30:36 +0000 (0:00:01.038) 0:03:53.157 ********
2026-03-19 00:31:43.172950 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.172954 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.172958 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.172961 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.172965 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.172968 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.172972 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.172976 | orchestrator |
2026-03-19 00:31:43.172979 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-19 00:31:43.172983 | orchestrator | Thursday 19 March 2026 00:30:36 +0000 (0:00:00.263) 0:03:53.420 ********
2026-03-19 00:31:43.172987 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.172991 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.172994 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.172998 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.173002 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.173005 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.173009 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.173013 | orchestrator |
2026-03-19 00:31:43.173016 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-19 00:31:43.173020 | orchestrator | Thursday 19 March 2026 00:30:37 +0000 (0:00:00.283) 0:03:53.703 ********
2026-03-19 00:31:43.173024 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.173028 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.173031 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.173035 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.173038 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.173042 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.173046 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.173049 | orchestrator |
2026-03-19 00:31:43.173053 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-19 00:31:43.173057 | orchestrator | Thursday 19 March 2026 00:30:37 +0000 (0:00:00.290) 0:03:53.993 ********
2026-03-19 00:31:43.173061 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.173064 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.173068 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.173074 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.173078 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.173081 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.173085 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.173089 | orchestrator |
2026-03-19 00:31:43.173092 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-19 00:31:43.173096 | orchestrator | Thursday 19 March 2026 00:30:42 +0000 (0:00:04.699) 0:03:58.693 ********
2026-03-19 00:31:43.173101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:31:43.173106 | orchestrator |
2026-03-19 00:31:43.173109 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-19 00:31:43.173113 | orchestrator | Thursday 19 March 2026 00:30:42 +0000 (0:00:00.369) 0:03:59.062 ********
2026-03-19 00:31:43.173117 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173121 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-19 00:31:43.173124 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173128 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:31:43.173132 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-19 00:31:43.173135 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173139 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-19 00:31:43.173143 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:31:43.173146 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:31:43.173150 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173154 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-19 00:31:43.173158 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173161 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:31:43.173165 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-19 00:31:43.173169 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173172 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-19 00:31:43.173184 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:31:43.173188 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:31:43.173191 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-19 00:31:43.173195 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-19 00:31:43.173199 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:31:43.173203 | orchestrator |
2026-03-19 00:31:43.173206 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-19 00:31:43.173210 | orchestrator | Thursday 19 March 2026 00:30:42 +0000 (0:00:00.324) 0:03:59.386 ********
2026-03-19 00:31:43.173214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:31:43.173218 | orchestrator |
2026-03-19 00:31:43.173222 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-19 00:31:43.173225 | orchestrator | Thursday 19 March 2026 00:30:43 +0000 (0:00:00.469) 0:03:59.856 ********
2026-03-19 00:31:43.173229 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-19 00:31:43.173233 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:31:43.173236 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-19 00:31:43.173240 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-19 00:31:43.173248 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:31:43.173252 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:31:43.173256 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-19 00:31:43.173263 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-19 00:31:43.173267 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:31:43.173270 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:31:43.173274 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-19 00:31:43.173278 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:31:43.173282 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-19 00:31:43.173285 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:31:43.173289 | orchestrator |
2026-03-19 00:31:43.173293 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-19 00:31:43.173296 | orchestrator | Thursday 19 March 2026 00:30:43 +0000 (0:00:00.331) 0:04:00.187 ********
2026-03-19 00:31:43.173300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:31:43.173304 | orchestrator |
2026-03-19 00:31:43.173308 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-19 00:31:43.173311 | orchestrator | Thursday 19 March 2026 00:30:44 +0000 (0:00:00.387) 0:04:00.575 ********
2026-03-19 00:31:43.173316 | orchestrator | changed: [testbed-manager]
2026-03-19 00:31:43.173320 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:31:43.173324 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:31:43.173328 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:31:43.173331 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:31:43.173335 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:31:43.173339 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:31:43.173344 | orchestrator |
2026-03-19 00:31:43.173348 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-19 00:31:43.173352 | orchestrator | Thursday 19 March 2026 00:31:17 +0000 (0:00:33.486) 0:04:34.061 ********
2026-03-19 00:31:43.173356 | orchestrator | changed: [testbed-manager]
2026-03-19 00:31:43.173360 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:31:43.173365 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:31:43.173369 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:31:43.173373 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:31:43.173377 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:31:43.173381 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:31:43.173386 | orchestrator |
2026-03-19 00:31:43.173390 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-19 00:31:43.173394 | orchestrator | Thursday 19 March 2026 00:31:26 +0000 (0:00:08.865) 0:04:42.926 ********
2026-03-19 00:31:43.173398 | orchestrator | changed: [testbed-manager]
2026-03-19 00:31:43.173402 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:31:43.173407 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:31:43.173411 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:31:43.173415 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:31:43.173419 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:31:43.173423 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:31:43.173428 | orchestrator |
2026-03-19 00:31:43.173432 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-19 00:31:43.173436 | orchestrator | Thursday 19 March 2026 00:31:35 +0000 (0:00:08.607) 0:04:51.534 ********
2026-03-19 00:31:43.173440 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:43.173445 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:43.173449 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:31:43.173453 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:31:43.173457 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:43.173461 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:31:43.173465 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:43.173469 | orchestrator |
2026-03-19 00:31:43.173474 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-19 00:31:43.173480 | orchestrator | Thursday 19 March 2026 00:31:36 +0000 (0:00:01.877) 0:04:53.411 ********
2026-03-19 00:31:43.173484 | orchestrator | changed: [testbed-manager]
2026-03-19 00:31:43.173488 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:31:43.173492 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:31:43.173497 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:31:43.173501 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:31:43.173505 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:31:43.173509 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:31:43.173513 | orchestrator |
2026-03-19 00:31:43.173520 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-19 00:31:54.271081 | orchestrator | Thursday 19 March 2026 00:31:43 +0000 (0:00:06.181) 0:04:59.592 ********
2026-03-19 00:31:54.271208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:31:54.271227 | orchestrator |
2026-03-19 00:31:54.271240 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-19 00:31:54.271252 | orchestrator | Thursday 19 March 2026 00:31:43 +0000 (0:00:00.394) 0:04:59.986 ********
2026-03-19 00:31:54.271263 | orchestrator | changed: [testbed-manager]
2026-03-19 00:31:54.271275 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:31:54.271286 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:31:54.271297 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:31:54.271307 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:31:54.271318 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:31:54.271328 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:31:54.271339 | orchestrator |
2026-03-19 00:31:54.271350 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-19 00:31:54.271360 | orchestrator | Thursday 19 March 2026 00:31:44 +0000 (0:00:00.704) 0:05:00.691 ********
2026-03-19 00:31:54.271371 | orchestrator | ok: [testbed-manager]
2026-03-19 00:31:54.271383 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:31:54.271393 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:31:54.271404 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:31:54.271414 |
orchestrator | ok: [testbed-node-3] 2026-03-19 00:31:54.271425 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:31:54.271435 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:31:54.271446 | orchestrator | 2026-03-19 00:31:54.271457 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-19 00:31:54.271467 | orchestrator | Thursday 19 March 2026 00:31:46 +0000 (0:00:01.901) 0:05:02.592 ******** 2026-03-19 00:31:54.271478 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:31:54.271488 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:31:54.271499 | orchestrator | changed: [testbed-manager] 2026-03-19 00:31:54.271510 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:31:54.271520 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:31:54.271531 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:31:54.271542 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:31:54.271552 | orchestrator | 2026-03-19 00:31:54.271563 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-19 00:31:54.271574 | orchestrator | Thursday 19 March 2026 00:31:46 +0000 (0:00:00.806) 0:05:03.399 ******** 2026-03-19 00:31:54.271585 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:31:54.271595 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:31:54.271606 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:31:54.271619 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:31:54.271631 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:31:54.271644 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:31:54.271656 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:31:54.271669 | orchestrator | 2026-03-19 00:31:54.271682 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-19 00:31:54.271806 | orchestrator | Thursday 19 March 2026 00:31:47 +0000 (0:00:00.276) 
0:05:03.676 ******** 2026-03-19 00:31:54.271848 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:31:54.271862 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:31:54.271872 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:31:54.271883 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:31:54.271894 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:31:54.271904 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:31:54.271915 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:31:54.271925 | orchestrator | 2026-03-19 00:31:54.271936 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-19 00:31:54.271946 | orchestrator | Thursday 19 March 2026 00:31:47 +0000 (0:00:00.364) 0:05:04.040 ******** 2026-03-19 00:31:54.271957 | orchestrator | ok: [testbed-manager] 2026-03-19 00:31:54.271968 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:31:54.271982 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:31:54.271999 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:31:54.272015 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:31:54.272030 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:31:54.272046 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:31:54.272062 | orchestrator | 2026-03-19 00:31:54.272077 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-19 00:31:54.272094 | orchestrator | Thursday 19 March 2026 00:31:47 +0000 (0:00:00.366) 0:05:04.407 ******** 2026-03-19 00:31:54.272110 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:31:54.272126 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:31:54.272144 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:31:54.272161 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:31:54.272178 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:31:54.272199 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
00:31:54.272216 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:31:54.272234 | orchestrator | 2026-03-19 00:31:54.272253 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-19 00:31:54.272272 | orchestrator | Thursday 19 March 2026 00:31:48 +0000 (0:00:00.259) 0:05:04.666 ******** 2026-03-19 00:31:54.272291 | orchestrator | ok: [testbed-manager] 2026-03-19 00:31:54.272308 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:31:54.272326 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:31:54.272345 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:31:54.272364 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:31:54.272382 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:31:54.272401 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:31:54.272419 | orchestrator | 2026-03-19 00:31:54.272437 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-19 00:31:54.272455 | orchestrator | Thursday 19 March 2026 00:31:48 +0000 (0:00:00.296) 0:05:04.963 ******** 2026-03-19 00:31:54.272475 | orchestrator | ok: [testbed-manager] =>  2026-03-19 00:31:54.272494 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272512 | orchestrator | ok: [testbed-node-0] =>  2026-03-19 00:31:54.272530 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272547 | orchestrator | ok: [testbed-node-1] =>  2026-03-19 00:31:54.272567 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272585 | orchestrator | ok: [testbed-node-2] =>  2026-03-19 00:31:54.272604 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272648 | orchestrator | ok: [testbed-node-3] =>  2026-03-19 00:31:54.272668 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272687 | orchestrator | ok: [testbed-node-4] =>  2026-03-19 00:31:54.272705 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272755 | orchestrator | ok: [testbed-node-5] =>  
2026-03-19 00:31:54.272774 | orchestrator |  docker_version: 5:27.5.1 2026-03-19 00:31:54.272792 | orchestrator | 2026-03-19 00:31:54.272807 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-19 00:31:54.272818 | orchestrator | Thursday 19 March 2026 00:31:48 +0000 (0:00:00.269) 0:05:05.233 ******** 2026-03-19 00:31:54.272828 | orchestrator | ok: [testbed-manager] =>  2026-03-19 00:31:54.272853 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.272864 | orchestrator | ok: [testbed-node-0] =>  2026-03-19 00:31:54.272875 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.272886 | orchestrator | ok: [testbed-node-1] =>  2026-03-19 00:31:54.272896 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.272907 | orchestrator | ok: [testbed-node-2] =>  2026-03-19 00:31:54.272917 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.272927 | orchestrator | ok: [testbed-node-3] =>  2026-03-19 00:31:54.272938 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.272949 | orchestrator | ok: [testbed-node-4] =>  2026-03-19 00:31:54.272959 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.272970 | orchestrator | ok: [testbed-node-5] =>  2026-03-19 00:31:54.272980 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-19 00:31:54.273052 | orchestrator | 2026-03-19 00:31:54.273066 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-19 00:31:54.273077 | orchestrator | Thursday 19 March 2026 00:31:49 +0000 (0:00:00.270) 0:05:05.504 ******** 2026-03-19 00:31:54.273088 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:31:54.273098 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:31:54.273109 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:31:54.273119 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:31:54.273130 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 00:31:54.273140 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:31:54.273151 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:31:54.273162 | orchestrator | 2026-03-19 00:31:54.273172 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-19 00:31:54.273183 | orchestrator | Thursday 19 March 2026 00:31:49 +0000 (0:00:00.259) 0:05:05.763 ******** 2026-03-19 00:31:54.273194 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:31:54.273205 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:31:54.273215 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:31:54.273225 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:31:54.273236 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:31:54.273246 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:31:54.273257 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:31:54.273267 | orchestrator | 2026-03-19 00:31:54.273278 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-19 00:31:54.273289 | orchestrator | Thursday 19 March 2026 00:31:49 +0000 (0:00:00.281) 0:05:06.045 ******** 2026-03-19 00:31:54.273313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:31:54.273326 | orchestrator | 2026-03-19 00:31:54.273337 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-19 00:31:54.273348 | orchestrator | Thursday 19 March 2026 00:31:50 +0000 (0:00:00.424) 0:05:06.470 ******** 2026-03-19 00:31:54.273359 | orchestrator | ok: [testbed-manager] 2026-03-19 00:31:54.273370 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:31:54.273380 | orchestrator | ok: [testbed-node-1] 2026-03-19 
00:31:54.273391 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:31:54.273402 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:31:54.273412 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:31:54.273423 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:31:54.273433 | orchestrator | 2026-03-19 00:31:54.273444 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-19 00:31:54.273455 | orchestrator | Thursday 19 March 2026 00:31:50 +0000 (0:00:00.890) 0:05:07.361 ******** 2026-03-19 00:31:54.273466 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:31:54.273476 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:31:54.273506 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:31:54.273518 | orchestrator | ok: [testbed-manager] 2026-03-19 00:31:54.273529 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:31:54.273548 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:31:54.273558 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:31:54.273569 | orchestrator | 2026-03-19 00:31:54.273580 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-19 00:31:54.273592 | orchestrator | Thursday 19 March 2026 00:31:53 +0000 (0:00:02.997) 0:05:10.358 ******** 2026-03-19 00:31:54.273604 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-19 00:31:54.273615 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-19 00:31:54.273626 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-19 00:31:54.273636 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-19 00:31:54.273647 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-19 00:31:54.273658 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:31:54.273669 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-19 00:31:54.273679 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-19 00:31:54.273690 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-19 00:31:54.273700 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-19 00:31:54.273711 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:31:54.273757 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-19 00:31:54.273772 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-19 00:31:54.273783 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-19 00:31:54.273793 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:31:54.273804 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-19 00:31:54.273832 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-19 00:32:58.836015 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-19 00:32:58.836195 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:32:58.836228 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-19 00:32:58.836314 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-19 00:32:58.836340 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-19 00:32:58.836360 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:32:58.836381 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:32:58.836401 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-19 00:32:58.836421 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-19 00:32:58.836439 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-19 00:32:58.836459 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:32:58.836481 | orchestrator | 2026-03-19 00:32:58.836505 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-19 00:32:58.836527 | orchestrator | Thursday 
19 March 2026 00:31:54 +0000 (0:00:00.539) 0:05:10.898 ******** 2026-03-19 00:32:58.836546 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.836567 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.836586 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.836605 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.836624 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.836645 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.836694 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.836716 | orchestrator | 2026-03-19 00:32:58.836735 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-19 00:32:58.836756 | orchestrator | Thursday 19 March 2026 00:32:02 +0000 (0:00:07.637) 0:05:18.535 ******** 2026-03-19 00:32:58.836777 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.836796 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.836814 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.836835 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.836855 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.836875 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.836936 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.836957 | orchestrator | 2026-03-19 00:32:58.836976 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-19 00:32:58.836995 | orchestrator | Thursday 19 March 2026 00:32:03 +0000 (0:00:01.107) 0:05:19.643 ******** 2026-03-19 00:32:58.837015 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.837036 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.837056 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.837075 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.837094 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.837112 | orchestrator | 
changed: [testbed-node-5] 2026-03-19 00:32:58.837132 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.837152 | orchestrator | 2026-03-19 00:32:58.837172 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-19 00:32:58.837192 | orchestrator | Thursday 19 March 2026 00:32:13 +0000 (0:00:09.806) 0:05:29.449 ******** 2026-03-19 00:32:58.837210 | orchestrator | changed: [testbed-manager] 2026-03-19 00:32:58.837229 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.837270 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.837291 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.837311 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.837330 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.837349 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.837369 | orchestrator | 2026-03-19 00:32:58.837389 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-19 00:32:58.837408 | orchestrator | Thursday 19 March 2026 00:32:16 +0000 (0:00:03.328) 0:05:32.778 ******** 2026-03-19 00:32:58.837427 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.837447 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.837465 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.837485 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.837504 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.837523 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.837543 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.837562 | orchestrator | 2026-03-19 00:32:58.837582 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-19 00:32:58.837601 | orchestrator | Thursday 19 March 2026 00:32:17 +0000 (0:00:01.298) 0:05:34.077 ******** 2026-03-19 00:32:58.837621 | orchestrator | ok: [testbed-manager] 
2026-03-19 00:32:58.837641 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.837735 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.837763 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.837784 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.837800 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.837818 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.837837 | orchestrator | 2026-03-19 00:32:58.837854 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-19 00:32:58.837873 | orchestrator | Thursday 19 March 2026 00:32:19 +0000 (0:00:01.417) 0:05:35.494 ******** 2026-03-19 00:32:58.837891 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:32:58.837909 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:32:58.837927 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:32:58.837946 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:32:58.837965 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:32:58.837983 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:32:58.838001 | orchestrator | changed: [testbed-manager] 2026-03-19 00:32:58.838095 | orchestrator | 2026-03-19 00:32:58.838119 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-19 00:32:58.838141 | orchestrator | Thursday 19 March 2026 00:32:19 +0000 (0:00:00.552) 0:05:36.047 ******** 2026-03-19 00:32:58.838160 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.838178 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.838198 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.838235 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.838254 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.838271 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.838287 | orchestrator | changed: [testbed-node-0] 2026-03-19 
00:32:58.838302 | orchestrator | 2026-03-19 00:32:58.838320 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-19 00:32:58.838370 | orchestrator | Thursday 19 March 2026 00:32:29 +0000 (0:00:09.990) 0:05:46.038 ******** 2026-03-19 00:32:58.838388 | orchestrator | changed: [testbed-manager] 2026-03-19 00:32:58.838404 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.838460 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.838477 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.838495 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.838513 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.838531 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.838546 | orchestrator | 2026-03-19 00:32:58.838562 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-19 00:32:58.838578 | orchestrator | Thursday 19 March 2026 00:32:30 +0000 (0:00:01.140) 0:05:47.178 ******** 2026-03-19 00:32:58.838593 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.838611 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.838626 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.838643 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.838718 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.838739 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.838756 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.838773 | orchestrator | 2026-03-19 00:32:58.838791 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-19 00:32:58.838806 | orchestrator | Thursday 19 March 2026 00:32:40 +0000 (0:00:09.690) 0:05:56.869 ******** 2026-03-19 00:32:58.838822 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.838839 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.838857 | 
orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.838874 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.838891 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.838908 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.838924 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.838941 | orchestrator | 2026-03-19 00:32:58.838958 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-19 00:32:58.838974 | orchestrator | Thursday 19 March 2026 00:32:52 +0000 (0:00:11.669) 0:06:08.538 ******** 2026-03-19 00:32:58.838989 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-19 00:32:58.839005 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-19 00:32:58.839021 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-19 00:32:58.839036 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-19 00:32:58.839052 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-19 00:32:58.839069 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-19 00:32:58.839086 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-19 00:32:58.839103 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-19 00:32:58.839120 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-19 00:32:58.839136 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-19 00:32:58.839150 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-19 00:32:58.839166 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-19 00:32:58.839181 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-19 00:32:58.839197 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-19 00:32:58.839212 | orchestrator | 2026-03-19 00:32:58.839228 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-19 00:32:58.839242 | orchestrator | Thursday 19 March 2026 00:32:53 +0000 (0:00:01.192) 0:06:09.730 ******** 2026-03-19 00:32:58.839275 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:32:58.839293 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:32:58.839310 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:32:58.839327 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:32:58.839342 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:32:58.839357 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:32:58.839372 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:32:58.839388 | orchestrator | 2026-03-19 00:32:58.839403 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-19 00:32:58.839419 | orchestrator | Thursday 19 March 2026 00:32:53 +0000 (0:00:00.618) 0:06:10.349 ******** 2026-03-19 00:32:58.839434 | orchestrator | ok: [testbed-manager] 2026-03-19 00:32:58.839450 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:32:58.839466 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:32:58.839483 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:32:58.839499 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:32:58.839515 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:32:58.839532 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:32:58.839550 | orchestrator | 2026-03-19 00:32:58.839567 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-19 00:32:58.839586 | orchestrator | Thursday 19 March 2026 00:32:58 +0000 (0:00:04.165) 0:06:14.514 ******** 2026-03-19 00:32:58.839603 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:32:58.839620 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:32:58.839637 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:32:58.839654 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 00:32:58.839696 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:32:58.839713 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:32:58.839731 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:32:58.839748 | orchestrator | 2026-03-19 00:32:58.839827 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-19 00:32:58.839850 | orchestrator | Thursday 19 March 2026 00:32:58 +0000 (0:00:00.492) 0:06:15.007 ******** 2026-03-19 00:32:58.839867 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-19 00:32:58.839885 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-19 00:32:58.839902 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:32:58.839920 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-19 00:32:58.839937 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-19 00:32:58.839953 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:32:58.839971 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-19 00:32:58.839989 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-19 00:32:58.840006 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:32:58.840086 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-19 00:33:17.903971 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-19 00:33:17.904079 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:33:17.904092 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-19 00:33:17.904102 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-19 00:33:17.904111 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:33:17.904120 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-19 00:33:17.904129 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-19 00:33:17.904139 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:17.904148 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-19 00:33:17.904157 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-19 00:33:17.904165 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:17.904174 | orchestrator |
2026-03-19 00:33:17.904185 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-19 00:33:17.904222 | orchestrator | Thursday 19 March 2026 00:32:59 +0000 (0:00:00.522) 0:06:15.530 ********
2026-03-19 00:33:17.904232 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:17.904241 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:17.904249 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:17.904258 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:17.904266 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:17.904275 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:17.904283 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:17.904292 | orchestrator |
2026-03-19 00:33:17.904301 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-19 00:33:17.904310 | orchestrator | Thursday 19 March 2026 00:32:59 +0000 (0:00:00.454) 0:06:15.984 ********
2026-03-19 00:33:17.904319 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:17.904327 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:17.904336 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:17.904344 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:17.904353 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:17.904362 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:17.904370 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:17.904379 | orchestrator |
2026-03-19 00:33:17.904388 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-19 00:33:17.904397 | orchestrator | Thursday 19 March 2026 00:33:00 +0000 (0:00:00.601) 0:06:16.586 ********
2026-03-19 00:33:17.904405 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:17.904414 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:17.904423 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:17.904431 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:17.904440 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:17.904448 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:17.904457 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:17.904465 | orchestrator |
2026-03-19 00:33:17.904474 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-19 00:33:17.904498 | orchestrator | Thursday 19 March 2026 00:33:00 +0000 (0:00:00.528) 0:06:17.114 ********
2026-03-19 00:33:17.904507 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.904518 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:17.904528 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:17.904540 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:17.904555 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:17.904569 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:17.904583 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:17.904597 | orchestrator |
2026-03-19 00:33:17.904612 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-19 00:33:17.904628 | orchestrator | Thursday 19 March 2026 00:33:02 +0000 (0:00:01.847) 0:06:18.962 ********
2026-03-19 00:33:17.904648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:33:17.904704 | orchestrator |
2026-03-19 00:33:17.904717 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-19 00:33:17.904727 | orchestrator | Thursday 19 March 2026 00:33:03 +0000 (0:00:00.790) 0:06:19.752 ********
2026-03-19 00:33:17.904737 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.904747 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:17.904756 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:17.904767 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:17.904777 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:17.904787 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:17.904796 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:17.904805 | orchestrator |
2026-03-19 00:33:17.904814 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-19 00:33:17.904832 | orchestrator | Thursday 19 March 2026 00:33:04 +0000 (0:00:01.031) 0:06:20.784 ********
2026-03-19 00:33:17.904840 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.904849 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:17.904858 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:17.904866 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:17.904875 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:17.904883 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:17.904891 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:17.904900 | orchestrator |
2026-03-19 00:33:17.904909 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-19 00:33:17.904917 | orchestrator | Thursday 19 March 2026 00:33:05 +0000 (0:00:00.885) 0:06:21.669 ********
2026-03-19 00:33:17.904926 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.904934 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:17.904943 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:17.904951 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:17.904960 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:17.904968 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:17.904977 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:17.904985 | orchestrator |
2026-03-19 00:33:17.904994 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-19 00:33:17.905019 | orchestrator | Thursday 19 March 2026 00:33:06 +0000 (0:00:01.294) 0:06:22.964 ********
2026-03-19 00:33:17.905029 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:17.905037 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:17.905046 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:17.905054 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:17.905063 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:17.905071 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:17.905080 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:17.905088 | orchestrator |
2026-03-19 00:33:17.905097 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-19 00:33:17.905106 | orchestrator | Thursday 19 March 2026 00:33:07 +0000 (0:00:01.431) 0:06:24.395 ********
2026-03-19 00:33:17.905114 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.905123 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:17.905131 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:17.905140 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:17.905148 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:17.905157 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:17.905166 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:17.905174 | orchestrator |
2026-03-19 00:33:17.905183 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-19 00:33:17.905191 | orchestrator | Thursday 19 March 2026 00:33:09 +0000 (0:00:01.312) 0:06:25.708 ********
2026-03-19 00:33:17.905200 | orchestrator | changed: [testbed-manager]
2026-03-19 00:33:17.905209 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:17.905217 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:17.905226 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:17.905234 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:17.905242 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:17.905251 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:17.905259 | orchestrator |
2026-03-19 00:33:17.905268 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-19 00:33:17.905277 | orchestrator | Thursday 19 March 2026 00:33:11 +0000 (0:00:01.778) 0:06:27.486 ********
2026-03-19 00:33:17.905286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:33:17.905294 | orchestrator |
2026-03-19 00:33:17.905303 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-19 00:33:17.905312 | orchestrator | Thursday 19 March 2026 00:33:11 +0000 (0:00:00.825) 0:06:28.312 ********
2026-03-19 00:33:17.905332 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.905340 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:17.905349 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:17.905358 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:17.905366 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:17.905375 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:17.905383 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:17.905392 | orchestrator |
2026-03-19 00:33:17.905400 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-19 00:33:17.905409 | orchestrator | Thursday 19 March 2026 00:33:13 +0000 (0:00:01.362) 0:06:29.674 ********
2026-03-19 00:33:17.905418 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.905427 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:17.905435 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:17.905443 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:17.905452 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:17.905461 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:17.905469 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:17.905478 | orchestrator |
2026-03-19 00:33:17.905487 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-19 00:33:17.905495 | orchestrator | Thursday 19 March 2026 00:33:14 +0000 (0:00:01.292) 0:06:30.966 ********
2026-03-19 00:33:17.905504 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.905513 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:17.905521 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:17.905530 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:17.905541 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:17.905556 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:17.905572 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:17.905587 | orchestrator |
2026-03-19 00:33:17.905601 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-19 00:33:17.905616 | orchestrator | Thursday 19 March 2026 00:33:15 +0000 (0:00:01.136) 0:06:32.103 ********
2026-03-19 00:33:17.905630 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:17.905643 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:17.905656 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:17.905690 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:17.905705 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:17.905720 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:17.905735 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:17.905749 | orchestrator |
2026-03-19 00:33:17.905764 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-19 00:33:17.905773 | orchestrator | Thursday 19 March 2026 00:33:16 +0000 (0:00:01.120) 0:06:33.223 ********
2026-03-19 00:33:17.905782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:33:17.905791 | orchestrator |
2026-03-19 00:33:17.905800 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:17.905808 | orchestrator | Thursday 19 March 2026 00:33:17 +0000 (0:00:00.839) 0:06:34.062 ********
2026-03-19 00:33:17.905817 | orchestrator |
2026-03-19 00:33:17.905825 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:17.905833 | orchestrator | Thursday 19 March 2026 00:33:17 +0000 (0:00:00.043) 0:06:34.106 ********
2026-03-19 00:33:17.905842 | orchestrator |
2026-03-19 00:33:17.905850 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:17.905859 | orchestrator | Thursday 19 March 2026 00:33:17 +0000 (0:00:00.179) 0:06:34.285 ********
2026-03-19 00:33:17.905867 | orchestrator |
2026-03-19 00:33:17.905876 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:17.905891 | orchestrator | Thursday 19 March 2026 00:33:17 +0000 (0:00:00.040) 0:06:34.325 ********
2026-03-19 00:33:43.818325 | orchestrator |
2026-03-19 00:33:43.818439 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:43.818484 | orchestrator | Thursday 19 March 2026 00:33:17 +0000 (0:00:00.040) 0:06:34.365 ********
2026-03-19 00:33:43.818496 | orchestrator |
2026-03-19 00:33:43.818506 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:43.818515 | orchestrator | Thursday 19 March 2026 00:33:17 +0000 (0:00:00.047) 0:06:34.413 ********
2026-03-19 00:33:43.818525 | orchestrator |
2026-03-19 00:33:43.818534 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-19 00:33:43.818543 | orchestrator | Thursday 19 March 2026 00:33:18 +0000 (0:00:00.050) 0:06:34.464 ********
2026-03-19 00:33:43.818553 | orchestrator |
2026-03-19 00:33:43.818562 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-19 00:33:43.818572 | orchestrator | Thursday 19 March 2026 00:33:18 +0000 (0:00:00.041) 0:06:34.506 ********
2026-03-19 00:33:43.818581 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:43.818591 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:43.818601 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:43.818610 | orchestrator |
2026-03-19 00:33:43.818643 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-19 00:33:43.818653 | orchestrator | Thursday 19 March 2026 00:33:19 +0000 (0:00:01.212) 0:06:35.718 ********
2026-03-19 00:33:43.818767 | orchestrator | changed: [testbed-manager]
2026-03-19 00:33:43.818778 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:43.818788 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:43.818797 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:43.818807 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:43.818890 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:43.818903 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:43.818913 | orchestrator |
2026-03-19 00:33:43.818924 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-19 00:33:43.818938 | orchestrator | Thursday 19 March 2026 00:33:20 +0000 (0:00:01.307) 0:06:37.026 ********
2026-03-19 00:33:43.818955 | orchestrator | changed: [testbed-manager]
2026-03-19 00:33:43.818972 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:43.818989 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:43.819007 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:43.819025 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:43.819044 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:43.819062 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:43.819079 | orchestrator |
2026-03-19 00:33:43.819089 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-19 00:33:43.819099 | orchestrator | Thursday 19 March 2026 00:33:21 +0000 (0:00:01.246) 0:06:38.272 ********
2026-03-19 00:33:43.819108 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:43.819118 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:43.819127 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:43.819136 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:43.819146 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:43.819155 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:43.819165 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:43.819174 | orchestrator |
2026-03-19 00:33:43.819202 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-19 00:33:43.819212 | orchestrator | Thursday 19 March 2026 00:33:24 +0000 (0:00:02.483) 0:06:40.755 ********
2026-03-19 00:33:43.819221 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:43.819231 | orchestrator |
2026-03-19 00:33:43.819240 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-19 00:33:43.819250 | orchestrator | Thursday 19 March 2026 00:33:24 +0000 (0:00:00.091) 0:06:40.847 ********
2026-03-19 00:33:43.819259 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:43.819269 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:43.819278 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:43.819287 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:43.819309 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:33:43.819318 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:43.819328 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:43.819337 | orchestrator |
2026-03-19 00:33:43.819347 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-19 00:33:43.819357 | orchestrator | Thursday 19 March 2026 00:33:25 +0000 (0:00:01.184) 0:06:42.031 ********
2026-03-19 00:33:43.819367 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:43.819376 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:43.819388 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:43.819404 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:43.819419 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:43.819434 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:43.819449 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:43.819465 | orchestrator |
2026-03-19 00:33:43.819480 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-19 00:33:43.819559 | orchestrator | Thursday 19 March 2026 00:33:26 +0000 (0:00:00.527) 0:06:42.558 ********
2026-03-19 00:33:43.819578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:33:43.819596 | orchestrator |
2026-03-19 00:33:43.819614 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-19 00:33:43.819627 | orchestrator | Thursday 19 March 2026 00:33:26 +0000 (0:00:00.868) 0:06:43.427 ********
2026-03-19 00:33:43.819636 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:43.819646 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:43.819655 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:43.819688 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:43.819697 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:43.819707 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:43.819716 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:43.819725 | orchestrator |
2026-03-19 00:33:43.819735 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-19 00:33:43.819744 | orchestrator | Thursday 19 March 2026 00:33:28 +0000 (0:00:01.037) 0:06:44.465 ********
2026-03-19 00:33:43.819754 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-19 00:33:43.819787 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-19 00:33:43.819798 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-19 00:33:43.819807 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-19 00:33:43.819817 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-19 00:33:43.819826 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-19 00:33:43.819836 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-19 00:33:43.819845 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-19 00:33:43.819855 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-19 00:33:43.819864 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-19 00:33:43.819874 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-19 00:33:43.819883 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-19 00:33:43.819892 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-19 00:33:43.819902 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-19 00:33:43.819911 | orchestrator |
2026-03-19 00:33:43.819921 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-19 00:33:43.819930 | orchestrator | Thursday 19 March 2026 00:33:30 +0000 (0:00:02.479) 0:06:46.945 ********
2026-03-19 00:33:43.819940 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:43.819949 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:43.819958 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:43.819978 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:43.819987 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:43.819997 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:43.820006 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:43.820016 | orchestrator |
2026-03-19 00:33:43.820025 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-19 00:33:43.820035 | orchestrator | Thursday 19 March 2026 00:33:31 +0000 (0:00:00.501) 0:06:47.446 ********
2026-03-19 00:33:43.820046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:33:43.820058 | orchestrator |
2026-03-19 00:33:43.820067 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-19 00:33:43.820077 | orchestrator | Thursday 19 March 2026 00:33:31 +0000 (0:00:00.927) 0:06:48.374 ********
2026-03-19 00:33:43.820086 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:43.820097 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:43.820113 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:43.820129 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:43.820145 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:43.820162 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:43.820179 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:43.820196 | orchestrator |
2026-03-19 00:33:43.820231 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-19 00:33:43.820241 | orchestrator | Thursday 19 March 2026 00:33:32 +0000 (0:00:00.844) 0:06:49.219 ********
2026-03-19 00:33:43.820251 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:43.820271 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:43.820281 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:43.820290 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:43.820300 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:43.820309 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:43.820319 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:43.820328 | orchestrator |
2026-03-19 00:33:43.820338 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-19 00:33:43.820347 | orchestrator | Thursday 19 March 2026 00:33:33 +0000 (0:00:00.782) 0:06:50.001 ********
2026-03-19 00:33:43.820357 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:43.820367 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:43.820376 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:43.820386 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:43.820398 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:43.820414 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:43.820451 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:43.820468 | orchestrator |
2026-03-19 00:33:43.820481 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-19 00:33:43.820491 | orchestrator | Thursday 19 March 2026 00:33:34 +0000 (0:00:00.494) 0:06:50.495 ********
2026-03-19 00:33:43.820501 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:43.820510 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:33:43.820519 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:33:43.820529 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:33:43.820538 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:33:43.820547 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:33:43.820556 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:33:43.820566 | orchestrator |
2026-03-19 00:33:43.820575 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-19 00:33:43.820584 | orchestrator | Thursday 19 March 2026 00:33:35 +0000 (0:00:01.447) 0:06:51.943 ********
2026-03-19 00:33:43.820594 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:33:43.820603 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:33:43.820612 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:33:43.820622 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:33:43.820631 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:33:43.820648 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:33:43.820680 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:33:43.820691 | orchestrator |
2026-03-19 00:33:43.820701 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-19 00:33:43.820711 | orchestrator | Thursday 19 March 2026 00:33:36 +0000 (0:00:00.662) 0:06:52.606 ********
2026-03-19 00:33:43.820720 | orchestrator | ok: [testbed-manager]
2026-03-19 00:33:43.820730 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:33:43.820739 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:33:43.820749 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:33:43.820758 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:33:43.820767 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:33:43.820784 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:34:15.714402 | orchestrator |
2026-03-19 00:34:15.714543 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-19 00:34:15.714565 | orchestrator | Thursday 19 March 2026 00:33:43 +0000 (0:00:07.699) 0:07:00.305 ********
2026-03-19 00:34:15.714579 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.714594 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:34:15.714609 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:34:15.714623 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:34:15.714636 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:34:15.714771 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:34:15.714782 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:34:15.714791 | orchestrator |
2026-03-19 00:34:15.714799 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-19 00:34:15.714807 | orchestrator | Thursday 19 March 2026 00:33:45 +0000 (0:00:01.333) 0:07:01.638 ********
2026-03-19 00:34:15.714815 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.714824 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:34:15.714832 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:34:15.714840 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:34:15.714847 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:34:15.714856 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:34:15.714864 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:34:15.714872 | orchestrator |
2026-03-19 00:34:15.714890 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-19 00:34:15.714898 | orchestrator | Thursday 19 March 2026 00:33:46 +0000 (0:00:01.713) 0:07:03.352 ********
2026-03-19 00:34:15.714906 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.714914 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:34:15.714922 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:34:15.714930 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:34:15.714939 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:34:15.714948 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:34:15.714958 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:34:15.714967 | orchestrator |
2026-03-19 00:34:15.714976 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-19 00:34:15.714985 | orchestrator | Thursday 19 March 2026 00:33:48 +0000 (0:00:01.802) 0:07:05.154 ********
2026-03-19 00:34:15.714995 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715004 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715014 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715023 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715032 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715041 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715050 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715059 | orchestrator |
2026-03-19 00:34:15.715068 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-19 00:34:15.715077 | orchestrator | Thursday 19 March 2026 00:33:49 +0000 (0:00:00.861) 0:07:06.016 ********
2026-03-19 00:34:15.715086 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:34:15.715095 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:34:15.715104 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:34:15.715143 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:34:15.715152 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:34:15.715161 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:34:15.715174 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:34:15.715188 | orchestrator |
2026-03-19 00:34:15.715200 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-19 00:34:15.715213 | orchestrator | Thursday 19 March 2026 00:33:50 +0000 (0:00:00.764) 0:07:06.781 ********
2026-03-19 00:34:15.715228 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:34:15.715242 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:34:15.715255 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:34:15.715266 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:34:15.715274 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:34:15.715282 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:34:15.715289 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:34:15.715297 | orchestrator |
2026-03-19 00:34:15.715305 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-19 00:34:15.715313 | orchestrator | Thursday 19 March 2026 00:33:51 +0000 (0:00:00.651) 0:07:07.433 ********
2026-03-19 00:34:15.715321 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715329 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715337 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715345 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715352 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715360 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715368 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715375 | orchestrator |
2026-03-19 00:34:15.715383 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-19 00:34:15.715391 | orchestrator | Thursday 19 March 2026 00:33:51 +0000 (0:00:00.495) 0:07:07.928 ********
2026-03-19 00:34:15.715399 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715407 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715414 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715422 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715430 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715437 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715445 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715453 | orchestrator |
2026-03-19 00:34:15.715461 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-19 00:34:15.715468 | orchestrator | Thursday 19 March 2026 00:33:51 +0000 (0:00:00.495) 0:07:08.423 ********
2026-03-19 00:34:15.715476 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715484 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715492 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715499 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715507 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715515 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715523 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715530 | orchestrator |
2026-03-19 00:34:15.715538 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-19 00:34:15.715546 | orchestrator | Thursday 19 March 2026 00:33:52 +0000 (0:00:00.498) 0:07:08.921 ********
2026-03-19 00:34:15.715554 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715561 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715569 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715577 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715585 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715592 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715618 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715626 | orchestrator |
2026-03-19 00:34:15.715677 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-19 00:34:15.715687 | orchestrator | Thursday 19 March 2026 00:33:57 +0000 (0:00:04.937) 0:07:13.859 ********
2026-03-19 00:34:15.715704 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:34:15.715712 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:34:15.715730 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:34:15.715738 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:34:15.715746 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:34:15.715753 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:34:15.715761 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:34:15.715769 | orchestrator |
2026-03-19 00:34:15.715777 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-19 00:34:15.715785 | orchestrator | Thursday 19 March 2026 00:33:58 +0000 (0:00:00.726) 0:07:14.586 ********
2026-03-19 00:34:15.715795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:34:15.715805 | orchestrator |
2026-03-19 00:34:15.715813 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-19 00:34:15.715821 | orchestrator | Thursday 19 March 2026 00:33:58 +0000 (0:00:00.785) 0:07:15.371 ********
2026-03-19 00:34:15.715828 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715836 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715844 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715852 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715860 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715868 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715875 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715883 | orchestrator |
2026-03-19 00:34:15.715891 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-19 00:34:15.715899 | orchestrator | Thursday 19 March 2026 00:34:01 +0000 (0:00:02.059) 0:07:17.431 ********
2026-03-19 00:34:15.715906 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715914 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.715922 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715930 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.715937 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.715945 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.715952 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.715960 | orchestrator |
2026-03-19 00:34:15.715968 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-19 00:34:15.715976 | orchestrator | Thursday 19 March 2026 00:34:02 +0000 (0:00:01.341) 0:07:18.772 ********
2026-03-19 00:34:15.715984 | orchestrator | ok: [testbed-manager]
2026-03-19 00:34:15.715992 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:34:15.715999 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:34:15.716007 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:34:15.716015 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:34:15.716023 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:34:15.716030 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:34:15.716038 | orchestrator |
2026-03-19 00:34:15.716046 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-19 00:34:15.716058 | orchestrator | Thursday 19 March 2026 00:34:03 +0000 (0:00:00.856) 0:07:19.629 ********
2026-03-19 00:34:15.716067 | orchestrator | changed:
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716076 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716084 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716093 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716101 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716109 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716126 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 00:34:15.716133 | orchestrator | 2026-03-19 00:34:15.716141 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-19 00:34:15.716149 | orchestrator | Thursday 19 March 2026 00:34:04 +0000 (0:00:01.646) 0:07:21.275 ******** 2026-03-19 00:34:15.716158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:34:15.716166 | orchestrator | 2026-03-19 00:34:15.716179 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-19 00:34:15.716192 | 
orchestrator | Thursday 19 March 2026 00:34:05 +0000 (0:00:00.940) 0:07:22.215 ******** 2026-03-19 00:34:15.716207 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:15.716220 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:15.716234 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:15.716248 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:15.716262 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:15.716276 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:15.716285 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:15.716293 | orchestrator | 2026-03-19 00:34:15.716307 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-19 00:34:47.094540 | orchestrator | Thursday 19 March 2026 00:34:15 +0000 (0:00:09.917) 0:07:32.133 ******** 2026-03-19 00:34:47.094624 | orchestrator | ok: [testbed-manager] 2026-03-19 00:34:47.094689 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:34:47.094696 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:34:47.094703 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:34:47.094709 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:34:47.094715 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:34:47.094722 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:34:47.094738 | orchestrator | 2026-03-19 00:34:47.094752 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-19 00:34:47.094757 | orchestrator | Thursday 19 March 2026 00:34:17 +0000 (0:00:01.882) 0:07:34.016 ******** 2026-03-19 00:34:47.094761 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:34:47.094765 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:34:47.094769 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:34:47.094773 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:34:47.094777 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:34:47.094781 | orchestrator | ok: [testbed-node-4] 
2026-03-19 00:34:47.094785 | orchestrator | 2026-03-19 00:34:47.094789 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-19 00:34:47.094793 | orchestrator | Thursday 19 March 2026 00:34:19 +0000 (0:00:01.478) 0:07:35.495 ******** 2026-03-19 00:34:47.094797 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.094802 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.094805 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.094809 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.094813 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.094816 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.094820 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.094824 | orchestrator | 2026-03-19 00:34:47.094827 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-19 00:34:47.094831 | orchestrator | 2026-03-19 00:34:47.094835 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-19 00:34:47.094839 | orchestrator | Thursday 19 March 2026 00:34:20 +0000 (0:00:01.221) 0:07:36.716 ******** 2026-03-19 00:34:47.094842 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:34:47.094846 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:34:47.094869 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:34:47.094873 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:34:47.094877 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:34:47.094881 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:34:47.094884 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:34:47.094888 | orchestrator | 2026-03-19 00:34:47.094893 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-19 00:34:47.094899 | orchestrator | 2026-03-19 00:34:47.094905 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-19 00:34:47.094912 | orchestrator | Thursday 19 March 2026 00:34:20 +0000 (0:00:00.524) 0:07:37.241 ******** 2026-03-19 00:34:47.094917 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.094923 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.094929 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.094935 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.094941 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.094963 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.094969 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.094976 | orchestrator | 2026-03-19 00:34:47.094982 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-19 00:34:47.094988 | orchestrator | Thursday 19 March 2026 00:34:22 +0000 (0:00:01.293) 0:07:38.535 ******** 2026-03-19 00:34:47.094992 | orchestrator | ok: [testbed-manager] 2026-03-19 00:34:47.094996 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:34:47.094999 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:34:47.095003 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:34:47.095007 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:34:47.095010 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:34:47.095014 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:34:47.095021 | orchestrator | 2026-03-19 00:34:47.095027 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-19 00:34:47.095033 | orchestrator | Thursday 19 March 2026 00:34:23 +0000 (0:00:01.633) 0:07:40.169 ******** 2026-03-19 00:34:47.095039 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:34:47.095045 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:34:47.095051 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:34:47.095057 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 00:34:47.095063 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:34:47.095074 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:34:47.095081 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:34:47.095085 | orchestrator | 2026-03-19 00:34:47.095089 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-19 00:34:47.095093 | orchestrator | Thursday 19 March 2026 00:34:24 +0000 (0:00:00.510) 0:07:40.679 ******** 2026-03-19 00:34:47.095097 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:34:47.095103 | orchestrator | 2026-03-19 00:34:47.095108 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-19 00:34:47.095112 | orchestrator | Thursday 19 March 2026 00:34:25 +0000 (0:00:00.797) 0:07:41.477 ******** 2026-03-19 00:34:47.095119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:34:47.095125 | orchestrator | 2026-03-19 00:34:47.095130 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-19 00:34:47.095134 | orchestrator | Thursday 19 March 2026 00:34:25 +0000 (0:00:00.907) 0:07:42.385 ******** 2026-03-19 00:34:47.095139 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095143 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095148 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095152 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095161 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095166 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095170 | 
orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.095174 | orchestrator | 2026-03-19 00:34:47.095192 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-19 00:34:47.095196 | orchestrator | Thursday 19 March 2026 00:34:35 +0000 (0:00:09.708) 0:07:52.093 ******** 2026-03-19 00:34:47.095201 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095205 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095210 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.095214 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095218 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095222 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095227 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095231 | orchestrator | 2026-03-19 00:34:47.095236 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-19 00:34:47.095240 | orchestrator | Thursday 19 March 2026 00:34:36 +0000 (0:00:00.807) 0:07:52.901 ******** 2026-03-19 00:34:47.095246 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095252 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095262 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.095269 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095275 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095280 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095286 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095292 | orchestrator | 2026-03-19 00:34:47.095297 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-19 00:34:47.095304 | orchestrator | Thursday 19 March 2026 00:34:37 +0000 (0:00:01.317) 0:07:54.218 ******** 2026-03-19 00:34:47.095310 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095317 | orchestrator | 
changed: [testbed-node-0] 2026-03-19 00:34:47.095323 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095329 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095336 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095341 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095345 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095349 | orchestrator | 2026-03-19 00:34:47.095354 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-19 00:34:47.095358 | orchestrator | Thursday 19 March 2026 00:34:39 +0000 (0:00:02.008) 0:07:56.227 ******** 2026-03-19 00:34:47.095362 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095366 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.095371 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095375 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095379 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095383 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095387 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095392 | orchestrator | 2026-03-19 00:34:47.095396 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-19 00:34:47.095400 | orchestrator | Thursday 19 March 2026 00:34:41 +0000 (0:00:01.333) 0:07:57.560 ******** 2026-03-19 00:34:47.095405 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095409 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.095413 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095418 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095422 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095430 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095435 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095439 | orchestrator | 2026-03-19 
00:34:47.095444 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-19 00:34:47.095448 | orchestrator | 2026-03-19 00:34:47.095452 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-19 00:34:47.095457 | orchestrator | Thursday 19 March 2026 00:34:42 +0000 (0:00:01.154) 0:07:58.715 ******** 2026-03-19 00:34:47.095467 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:34:47.095471 | orchestrator | 2026-03-19 00:34:47.095475 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-19 00:34:47.095478 | orchestrator | Thursday 19 March 2026 00:34:43 +0000 (0:00:00.967) 0:07:59.683 ******** 2026-03-19 00:34:47.095482 | orchestrator | ok: [testbed-manager] 2026-03-19 00:34:47.095486 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:34:47.095489 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:34:47.095493 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:34:47.095497 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:34:47.095500 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:34:47.095504 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:34:47.095508 | orchestrator | 2026-03-19 00:34:47.095511 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-19 00:34:47.095515 | orchestrator | Thursday 19 March 2026 00:34:44 +0000 (0:00:00.855) 0:08:00.538 ******** 2026-03-19 00:34:47.095519 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:47.095523 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:47.095526 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:47.095530 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:47.095534 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:47.095537 | 
orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:47.095541 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:47.095545 | orchestrator | 2026-03-19 00:34:47.095548 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-19 00:34:47.095552 | orchestrator | Thursday 19 March 2026 00:34:45 +0000 (0:00:01.260) 0:08:01.799 ******** 2026-03-19 00:34:47.095556 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:34:47.095560 | orchestrator | 2026-03-19 00:34:47.095563 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-19 00:34:47.095567 | orchestrator | Thursday 19 March 2026 00:34:46 +0000 (0:00:00.807) 0:08:02.606 ******** 2026-03-19 00:34:47.095571 | orchestrator | ok: [testbed-manager] 2026-03-19 00:34:47.095575 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:34:47.095578 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:34:47.095582 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:34:47.095585 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:34:47.095589 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:34:47.095593 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:34:47.095596 | orchestrator | 2026-03-19 00:34:47.095604 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-19 00:34:48.686955 | orchestrator | Thursday 19 March 2026 00:34:47 +0000 (0:00:00.908) 0:08:03.514 ******** 2026-03-19 00:34:48.687045 | orchestrator | changed: [testbed-manager] 2026-03-19 00:34:48.687057 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:34:48.687064 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:34:48.687070 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:34:48.687077 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:34:48.687083 | 
orchestrator | changed: [testbed-node-4] 2026-03-19 00:34:48.687089 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:34:48.687096 | orchestrator | 2026-03-19 00:34:48.687103 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:34:48.687110 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-19 00:34:48.687117 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-19 00:34:48.687124 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-19 00:34:48.687153 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-19 00:34:48.687160 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-19 00:34:48.687166 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-19 00:34:48.687172 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-19 00:34:48.687178 | orchestrator | 2026-03-19 00:34:48.687184 | orchestrator | 2026-03-19 00:34:48.687190 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:34:48.687197 | orchestrator | Thursday 19 March 2026 00:34:48 +0000 (0:00:01.305) 0:08:04.820 ******** 2026-03-19 00:34:48.687203 | orchestrator | =============================================================================== 2026-03-19 00:34:48.687209 | orchestrator | osism.commons.packages : Install required packages --------------------- 69.88s 2026-03-19 00:34:48.687215 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.34s 2026-03-19 00:34:48.687221 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 33.49s 2026-03-19 00:34:48.687240 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.02s 2026-03-19 00:34:48.687246 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.87s 2026-03-19 00:34:48.687252 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.67s 2026-03-19 00:34:48.687258 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.04s 2026-03-19 00:34:48.687265 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.99s 2026-03-19 00:34:48.687271 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.92s 2026-03-19 00:34:48.687279 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.81s 2026-03-19 00:34:48.687289 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.71s 2026-03-19 00:34:48.687299 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.69s 2026-03-19 00:34:48.687315 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.66s 2026-03-19 00:34:48.687326 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.86s 2026-03-19 00:34:48.687335 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.61s 2026-03-19 00:34:48.687345 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.70s 2026-03-19 00:34:48.687355 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.64s 2026-03-19 00:34:48.687365 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.18s 2026-03-19 00:34:48.687375 | orchestrator | Gathers 
facts about hosts ----------------------------------------------- 5.92s 2026-03-19 00:34:48.687385 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.23s 2026-03-19 00:34:48.859827 | orchestrator | + osism apply fail2ban 2026-03-19 00:35:00.390224 | orchestrator | 2026-03-19 00:35:00 | INFO  | Prepare task for execution of fail2ban. 2026-03-19 00:35:00.493755 | orchestrator | 2026-03-19 00:35:00 | INFO  | Task 3594493d-5e10-4028-a297-941161639f14 (fail2ban) was prepared for execution. 2026-03-19 00:35:00.493865 | orchestrator | 2026-03-19 00:35:00 | INFO  | It takes a moment until task 3594493d-5e10-4028-a297-941161639f14 (fail2ban) has been started and output is visible here. 2026-03-19 00:35:21.071976 | orchestrator | 2026-03-19 00:35:21.072097 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-19 00:35:21.072145 | orchestrator | 2026-03-19 00:35:21.072159 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-19 00:35:21.072171 | orchestrator | Thursday 19 March 2026 00:35:03 +0000 (0:00:00.354) 0:00:00.354 ******** 2026-03-19 00:35:21.072185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:35:21.072199 | orchestrator | 2026-03-19 00:35:21.072211 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-19 00:35:21.072222 | orchestrator | Thursday 19 March 2026 00:35:05 +0000 (0:00:01.142) 0:00:01.497 ******** 2026-03-19 00:35:21.072233 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:35:21.072246 | orchestrator | changed: [testbed-manager] 2026-03-19 00:35:21.072257 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:35:21.072267 | 
orchestrator | changed: [testbed-node-2] 2026-03-19 00:35:21.072278 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:35:21.072289 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:35:21.072299 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:35:21.072310 | orchestrator | 2026-03-19 00:35:21.072321 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-19 00:35:21.072332 | orchestrator | Thursday 19 March 2026 00:35:16 +0000 (0:00:11.143) 0:00:12.640 ******** 2026-03-19 00:35:21.072343 | orchestrator | changed: [testbed-manager] 2026-03-19 00:35:21.072353 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:35:21.072364 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:35:21.072375 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:35:21.072385 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:35:21.072396 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:35:21.072406 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:35:21.072417 | orchestrator | 2026-03-19 00:35:21.072428 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-19 00:35:21.072439 | orchestrator | Thursday 19 March 2026 00:35:17 +0000 (0:00:01.614) 0:00:14.255 ******** 2026-03-19 00:35:21.072450 | orchestrator | ok: [testbed-manager] 2026-03-19 00:35:21.072462 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:35:21.072472 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:35:21.072483 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:35:21.072494 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:35:21.072505 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:35:21.072517 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:35:21.072530 | orchestrator | 2026-03-19 00:35:21.072543 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-19 00:35:21.072556 | orchestrator | Thursday 19 
March 2026 00:35:19 +0000 (0:00:01.371) 0:00:15.626 ********
2026-03-19 00:35:21.072568 | orchestrator | changed: [testbed-manager]
2026-03-19 00:35:21.072581 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:35:21.072594 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:35:21.072606 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:35:21.072641 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:35:21.072653 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:35:21.072665 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:35:21.072678 | orchestrator |
2026-03-19 00:35:21.072690 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:35:21.072719 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072733 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072745 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072758 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072780 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072793 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072805 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:35:21.072818 | orchestrator |
2026-03-19 00:35:21.072830 | orchestrator |
2026-03-19 00:35:21.072842 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:35:21.072855 | orchestrator | Thursday 19 March 2026 00:35:20 +0000 (0:00:01.618) 0:00:17.245 ********
2026-03-19 00:35:21.072868 | orchestrator | ===============================================================================
2026-03-19 00:35:21.072879 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.14s
2026-03-19 00:35:21.072890 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s
2026-03-19 00:35:21.072900 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.61s
2026-03-19 00:35:21.072911 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.37s
2026-03-19 00:35:21.072922 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.14s
2026-03-19 00:35:21.201961 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-19 00:35:21.202135 | orchestrator | + osism apply network
2026-03-19 00:35:32.446465 | orchestrator | 2026-03-19 00:35:32 | INFO  | Prepare task for execution of network.
2026-03-19 00:35:32.519978 | orchestrator | 2026-03-19 00:35:32 | INFO  | Task 543a3ffb-824c-4861-b052-786f65783c1c (network) was prepared for execution.
2026-03-19 00:35:32.520139 | orchestrator | 2026-03-19 00:35:32 | INFO  | It takes a moment until task 543a3ffb-824c-4861-b052-786f65783c1c (network) has been started and output is visible here.
2026-03-19 00:35:58.978727 | orchestrator |
2026-03-19 00:35:58.978834 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-19 00:35:58.978848 | orchestrator |
2026-03-19 00:35:58.978859 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-19 00:35:58.978868 | orchestrator | Thursday 19 March 2026 00:35:35 +0000 (0:00:00.349) 0:00:00.349 ********
2026-03-19 00:35:58.978877 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.978887 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.978896 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.978904 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.978914 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.978928 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.978949 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.978991 | orchestrator |
2026-03-19 00:35:58.979004 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-19 00:35:58.979019 | orchestrator | Thursday 19 March 2026 00:35:36 +0000 (0:00:00.614) 0:00:00.964 ********
2026-03-19 00:35:58.979035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:35:58.979052 | orchestrator |
2026-03-19 00:35:58.979066 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-19 00:35:58.979079 | orchestrator | Thursday 19 March 2026 00:35:37 +0000 (0:00:01.176) 0:00:02.141 ********
2026-03-19 00:35:58.979092 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.979106 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.979120 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.979133 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.979148 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.979194 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.979209 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.979218 | orchestrator |
2026-03-19 00:35:58.979227 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-19 00:35:58.979236 | orchestrator | Thursday 19 March 2026 00:35:40 +0000 (0:00:02.441) 0:00:04.583 ********
2026-03-19 00:35:58.979246 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.979256 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.979265 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.979275 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.979284 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.979294 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.979303 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.979313 | orchestrator |
2026-03-19 00:35:58.979323 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-19 00:35:58.979332 | orchestrator | Thursday 19 March 2026 00:35:41 +0000 (0:00:01.583) 0:00:06.166 ********
2026-03-19 00:35:58.979342 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-19 00:35:58.979353 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-19 00:35:58.979364 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-19 00:35:58.979374 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-19 00:35:58.979384 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-19 00:35:58.979394 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-19 00:35:58.979404 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-19 00:35:58.979413 | orchestrator |
2026-03-19 00:35:58.979423 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-19 00:35:58.979433 | orchestrator | Thursday 19 March 2026 00:35:42 +0000 (0:00:01.016) 0:00:07.183 ********
2026-03-19 00:35:58.979443 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 00:35:58.979454 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-19 00:35:58.979463 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-19 00:35:58.979473 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 00:35:58.979482 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 00:35:58.979491 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 00:35:58.979501 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 00:35:58.979511 | orchestrator |
2026-03-19 00:35:58.979521 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-19 00:35:58.979530 | orchestrator | Thursday 19 March 2026 00:35:45 +0000 (0:00:02.889) 0:00:10.072 ********
2026-03-19 00:35:58.979541 | orchestrator | changed: [testbed-manager]
2026-03-19 00:35:58.979550 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:35:58.979560 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:35:58.979570 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:35:58.979580 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:35:58.979590 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:35:58.979599 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:35:58.979608 | orchestrator |
2026-03-19 00:35:58.979676 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-19 00:35:58.979685 | orchestrator | Thursday 19 March 2026 00:35:47 +0000 (0:00:01.475) 0:00:11.548 ********
2026-03-19 00:35:58.979694 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 00:35:58.979702 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 00:35:58.979711 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 00:35:58.979719 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-19 00:35:58.979728 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-19 00:35:58.979736 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 00:35:58.979744 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 00:35:58.979753 | orchestrator |
2026-03-19 00:35:58.979761 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-19 00:35:58.979770 | orchestrator | Thursday 19 March 2026 00:35:48 +0000 (0:00:01.791) 0:00:13.340 ********
2026-03-19 00:35:58.979786 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.979795 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.979803 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.979812 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.979820 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.979829 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.979837 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.979845 | orchestrator |
2026-03-19 00:35:58.979854 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-19 00:35:58.979880 | orchestrator | Thursday 19 March 2026 00:35:49 +0000 (0:00:00.937) 0:00:14.277 ********
2026-03-19 00:35:58.979890 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:35:58.979898 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:35:58.979907 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:35:58.979915 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:35:58.979924 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:35:58.979933 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:35:58.979941 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:35:58.979950 | orchestrator |
2026-03-19 00:35:58.979958 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-19 00:35:58.979967 | orchestrator | Thursday 19 March 2026 00:35:50 +0000 (0:00:00.750) 0:00:15.027 ********
2026-03-19 00:35:58.979975 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.979984 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.979992 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.980001 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.980009 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.980017 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.980026 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.980034 | orchestrator |
2026-03-19 00:35:58.980043 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-19 00:35:58.980051 | orchestrator | Thursday 19 March 2026 00:35:52 +0000 (0:00:01.975) 0:00:17.003 ********
2026-03-19 00:35:58.980060 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:35:58.980068 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:35:58.980077 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:35:58.980091 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:35:58.980106 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:35:58.980120 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:35:58.980134 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-03-19 00:35:58.980150 | orchestrator |
2026-03-19 00:35:58.980163 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-19 00:35:58.980178 | orchestrator | Thursday 19 March 2026 00:35:53 +0000 (0:00:00.874) 0:00:17.878 ********
2026-03-19 00:35:58.980189 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.980203 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:35:58.980217 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:35:58.980231 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:35:58.980245 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:35:58.980259 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:35:58.980274 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:35:58.980288 | orchestrator |
2026-03-19 00:35:58.980303 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-19 00:35:58.980316 | orchestrator | Thursday 19 March 2026 00:35:54 +0000 (0:00:01.415) 0:00:19.293 ********
2026-03-19 00:35:58.980337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:35:58.980354 | orchestrator |
2026-03-19 00:35:58.980369 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-19 00:35:58.980382 | orchestrator | Thursday 19 March 2026 00:35:55 +0000 (0:00:01.187) 0:00:20.481 ********
2026-03-19 00:35:58.980405 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.980420 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.980433 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.980447 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.980459 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.980472 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.980485 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.980499 | orchestrator |
2026-03-19 00:35:58.980512 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-19 00:35:58.980526 | orchestrator | Thursday 19 March 2026 00:35:57 +0000 (0:00:01.105) 0:00:21.586 ********
2026-03-19 00:35:58.980539 | orchestrator | ok: [testbed-manager]
2026-03-19 00:35:58.980554 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:35:58.980566 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:35:58.980578 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:35:58.980591 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:35:58.980604 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:35:58.980649 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:35:58.980664 | orchestrator |
2026-03-19 00:35:58.980678 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-19 00:35:58.980691 | orchestrator | Thursday 19 March 2026 00:35:57 +0000 (0:00:00.807) 0:00:22.394 ********
2026-03-19 00:35:58.980705 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980719 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980731 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980744 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980758 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980772 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980785 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980798 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980812 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980825 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 00:35:58.980839 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980852 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980866 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980879 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 00:35:58.980893 | orchestrator |
2026-03-19 00:35:58.980923 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-19 00:36:14.018245 | orchestrator | Thursday 19 March 2026 00:35:58 +0000 (0:00:01.075) 0:00:23.470 ********
2026-03-19 00:36:14.018364 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:14.018382 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:14.018394 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:14.018405 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:14.018415 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:14.018426 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:14.018437 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:14.018448 | orchestrator |
2026-03-19 00:36:14.018460 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-19 00:36:14.018471 | orchestrator | Thursday 19 March 2026 00:35:59 +0000 (0:00:00.779) 0:00:24.249 ********
2026-03-19 00:36:14.018484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-5, testbed-node-3, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-4
2026-03-19 00:36:14.018525 | orchestrator |
2026-03-19 00:36:14.018536 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-19 00:36:14.018548 | orchestrator | Thursday 19 March 2026 00:36:03 +0000 (0:00:04.088) 0:00:28.338 ********
2026-03-19 00:36:14.018570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.018590 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-19 00:36:14.018611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.018746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.018769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.018792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-19 00:36:14.018812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.018829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.018843 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-19 00:36:14.018861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-19 00:36:14.018872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-19 00:36:14.018909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-19 00:36:14.018927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-19 00:36:14.018959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-19 00:36:14.018979 | orchestrator |
2026-03-19 00:36:14.018999 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-19 00:36:14.019018 | orchestrator | Thursday 19 March 2026 00:36:08 +0000 (0:00:05.057) 0:00:33.396 ********
2026-03-19 00:36:14.019037 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-19 00:36:14.019049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.019060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.019071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.019089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.019101 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-19 00:36:14.019112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.019123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-19 00:36:14.019134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-19 00:36:14.019145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-19 00:36:14.019156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-19 00:36:14.019167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-19 00:36:14.019199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-19 00:36:26.842455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-19 00:36:26.842604 | orchestrator |
2026-03-19 00:36:26.842774 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-19 00:36:26.842794 | orchestrator | Thursday 19 March 2026 00:36:14 +0000 (0:00:05.559) 0:00:38.955 ********
2026-03-19 00:36:26.842813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:36:26.842830 | orchestrator |
2026-03-19 00:36:26.842846 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-19 00:36:26.842862 | orchestrator | Thursday 19 March 2026 00:36:15 +0000 (0:00:01.194) 0:00:40.150 ********
2026-03-19 00:36:26.842879 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:36:26.842895 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:36:26.842979 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:36:26.843003 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:36:26.843022 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:36:26.843040 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:36:26.843057 | orchestrator | ok: [testbed-manager]
2026-03-19 00:36:26.843075 | orchestrator |
2026-03-19 00:36:26.843092 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-19 00:36:26.843114 | orchestrator | Thursday 19 March 2026 00:36:17 +0000 (0:00:01.564) 0:00:41.715 ********
2026-03-19 00:36:26.843132 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843153 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843172 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843192 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843213 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:26.843234 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843281 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843304 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843326 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843344 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:26.843362 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843379 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843415 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843484 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:26.843503 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843521 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843540 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843705 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843727 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:26.843745 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843763 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843781 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843799 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843817 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:26.843835 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843852 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843870 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843889 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843907 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:26.843925 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 00:36:26.843943 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 00:36:26.843962 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 00:36:26.843981 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 00:36:26.843999 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:26.844018 | orchestrator |
2026-03-19 00:36:26.844037 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-19 00:36:26.844085 | orchestrator | Thursday 19 March 2026 00:36:17 +0000 (0:00:00.750) 0:00:42.465 ********
2026-03-19 00:36:26.844109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:36:26.844130 | orchestrator |
2026-03-19 00:36:26.844150 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-19 00:36:26.844170 | orchestrator | Thursday 19 March 2026 00:36:19 +0000 (0:00:01.221) 0:00:43.686 ********
2026-03-19 00:36:26.844188 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:26.844209 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:26.844230 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:26.844248 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:26.844264 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:26.844280 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:26.844298 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:26.844315 | orchestrator |
2026-03-19 00:36:26.844332 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-19 00:36:26.844349 | orchestrator | Thursday 19 March 2026 00:36:19 +0000 (0:00:00.771) 0:00:44.458 ********
2026-03-19 00:36:26.844366 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:26.844384 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:26.844401 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:26.844419 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:26.844436 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:26.844454 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:26.844471 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:26.844488 | orchestrator |
2026-03-19 00:36:26.844505 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-19 00:36:26.844521 | orchestrator | Thursday 19 March 2026 00:36:20 +0000 (0:00:00.600) 0:00:45.058 ********
2026-03-19 00:36:26.844537 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:26.844571 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:26.844589 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:26.844607 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:26.844655 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:26.844672 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:26.844689 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:26.844705 | orchestrator |
2026-03-19 00:36:26.844721 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-19 00:36:26.844738 | orchestrator | Thursday 19 March 2026 00:36:21 +0000 (0:00:00.751) 0:00:45.810 ********
2026-03-19 00:36:26.844754 | orchestrator | ok: [testbed-manager]
2026-03-19 00:36:26.844771 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:36:26.844800 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:36:26.844817 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:36:26.844833 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:36:26.844848 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:36:26.844864 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:36:26.844879 | orchestrator |
2026-03-19 00:36:26.844895 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-19 00:36:26.844911 | orchestrator | Thursday 19 March 2026 00:36:22 +0000 (0:00:01.428) 0:00:47.239 ********
2026-03-19 00:36:26.844928 | orchestrator | ok: [testbed-manager]
2026-03-19 00:36:26.844944 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:36:26.844960 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:36:26.844975 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:36:26.844990 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:36:26.845005 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:36:26.845022 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:36:26.845037 | orchestrator |
2026-03-19 00:36:26.845053 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-19 00:36:26.845070 | orchestrator | Thursday 19 March 2026 00:36:23 +0000 (0:00:01.033) 0:00:48.272 ********
2026-03-19 00:36:26.845086 | orchestrator | ok: [testbed-manager]
2026-03-19 00:36:26.845102 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:36:26.845118 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:36:26.845134 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:36:26.845150 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:36:26.845166 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:36:26.845182 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:36:26.845198 | orchestrator |
2026-03-19 00:36:26.845214 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-19 00:36:26.845230 | orchestrator | Thursday 19 March 2026 00:36:25 +0000 (0:00:01.929) 0:00:50.202 ********
2026-03-19 00:36:26.845247 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:26.845263 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:26.845279 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:26.845295 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:26.845310 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:26.845326 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:26.845342 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:26.845359 | orchestrator |
2026-03-19 00:36:26.845375 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-19 00:36:26.845391 | orchestrator | Thursday 19 March 2026 00:36:26 +0000 (0:00:00.559) 0:00:50.762 ********
2026-03-19 00:36:26.845407 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:36:26.845423 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:36:26.845438 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:36:26.845453 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:36:26.845470 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:36:26.845485 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:36:26.845500 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:36:26.845515 | orchestrator |
2026-03-19 00:36:26.845531 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:36:26.845548 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-19 00:36:26.845579 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 00:36:26.845639 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 00:36:27.010854 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 00:36:27.010929 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 00:36:27.010934 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 00:36:27.010945 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 00:36:27.010950 | orchestrator |
2026-03-19 00:36:27.010954 | orchestrator |
2026-03-19 00:36:27.010958 |
orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:36:27.010964 | orchestrator | Thursday 19 March 2026 00:36:26 +0000 (0:00:00.573) 0:00:51.335 ******** 2026-03-19 00:36:27.010968 | orchestrator | =============================================================================== 2026-03-19 00:36:27.010972 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.56s 2026-03-19 00:36:27.010981 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.06s 2026-03-19 00:36:27.010986 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.09s 2026-03-19 00:36:27.010989 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.89s 2026-03-19 00:36:27.010993 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.44s 2026-03-19 00:36:27.010997 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.98s 2026-03-19 00:36:27.011001 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.93s 2026-03-19 00:36:27.011004 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.79s 2026-03-19 00:36:27.011008 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.58s 2026-03-19 00:36:27.011012 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.56s 2026-03-19 00:36:27.011016 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2026-03-19 00:36:27.011020 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.43s 2026-03-19 00:36:27.011024 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.42s 2026-03-19 00:36:27.011028 | orchestrator | 
osism.commons.network : Include network extra init ---------------------- 1.22s 2026-03-19 00:36:27.011031 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.19s 2026-03-19 00:36:27.011035 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s 2026-03-19 00:36:27.011039 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2026-03-19 00:36:27.011043 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s 2026-03-19 00:36:27.011047 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.08s 2026-03-19 00:36:27.011050 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.03s 2026-03-19 00:36:27.124433 | orchestrator | + osism apply wireguard 2026-03-19 00:36:38.235333 | orchestrator | 2026-03-19 00:36:38 | INFO  | Prepare task for execution of wireguard. 2026-03-19 00:36:38.309779 | orchestrator | 2026-03-19 00:36:38 | INFO  | Task 5644e67f-3c17-408c-be49-456198e5babf (wireguard) was prepared for execution. 2026-03-19 00:36:38.309891 | orchestrator | 2026-03-19 00:36:38 | INFO  | It takes a moment until task 5644e67f-3c17-408c-be49-456198e5babf (wireguard) has been started and output is visible here. 
2026-03-19 00:36:56.846482 | orchestrator | 2026-03-19 00:36:56.846575 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-19 00:36:56.846586 | orchestrator | 2026-03-19 00:36:56.846592 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-19 00:36:56.846598 | orchestrator | Thursday 19 March 2026 00:36:41 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-03-19 00:36:56.846604 | orchestrator | ok: [testbed-manager] 2026-03-19 00:36:56.846644 | orchestrator | 2026-03-19 00:36:56.846651 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-19 00:36:56.846656 | orchestrator | Thursday 19 March 2026 00:36:43 +0000 (0:00:01.730) 0:00:02.014 ******** 2026-03-19 00:36:56.846662 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846668 | orchestrator | 2026-03-19 00:36:56.846674 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-19 00:36:56.846679 | orchestrator | Thursday 19 March 2026 00:36:49 +0000 (0:00:06.194) 0:00:08.208 ******** 2026-03-19 00:36:56.846685 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846691 | orchestrator | 2026-03-19 00:36:56.846696 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-19 00:36:56.846701 | orchestrator | Thursday 19 March 2026 00:36:49 +0000 (0:00:00.532) 0:00:08.741 ******** 2026-03-19 00:36:56.846707 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846712 | orchestrator | 2026-03-19 00:36:56.846717 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-19 00:36:56.846723 | orchestrator | Thursday 19 March 2026 00:36:50 +0000 (0:00:00.419) 0:00:09.160 ******** 2026-03-19 00:36:56.846728 | orchestrator | ok: [testbed-manager] 2026-03-19 00:36:56.846733 | orchestrator | 2026-03-19 
00:36:56.846739 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-19 00:36:56.846744 | orchestrator | Thursday 19 March 2026 00:36:50 +0000 (0:00:00.561) 0:00:09.722 ******** 2026-03-19 00:36:56.846750 | orchestrator | ok: [testbed-manager] 2026-03-19 00:36:56.846755 | orchestrator | 2026-03-19 00:36:56.846760 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-19 00:36:56.846766 | orchestrator | Thursday 19 March 2026 00:36:51 +0000 (0:00:00.421) 0:00:10.143 ******** 2026-03-19 00:36:56.846771 | orchestrator | ok: [testbed-manager] 2026-03-19 00:36:56.846776 | orchestrator | 2026-03-19 00:36:56.846782 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-19 00:36:56.846787 | orchestrator | Thursday 19 March 2026 00:36:51 +0000 (0:00:00.407) 0:00:10.550 ******** 2026-03-19 00:36:56.846792 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846798 | orchestrator | 2026-03-19 00:36:56.846803 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-19 00:36:56.846808 | orchestrator | Thursday 19 March 2026 00:36:52 +0000 (0:00:01.173) 0:00:11.723 ******** 2026-03-19 00:36:56.846814 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-19 00:36:56.846820 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846825 | orchestrator | 2026-03-19 00:36:56.846830 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-19 00:36:56.846836 | orchestrator | Thursday 19 March 2026 00:36:53 +0000 (0:00:00.909) 0:00:12.633 ******** 2026-03-19 00:36:56.846862 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846868 | orchestrator | 2026-03-19 00:36:56.846873 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-19 
00:36:56.846879 | orchestrator | Thursday 19 March 2026 00:36:55 +0000 (0:00:01.959) 0:00:14.592 ******** 2026-03-19 00:36:56.846884 | orchestrator | changed: [testbed-manager] 2026-03-19 00:36:56.846890 | orchestrator | 2026-03-19 00:36:56.846895 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:36:56.846921 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:36:56.846928 | orchestrator | 2026-03-19 00:36:56.846933 | orchestrator | 2026-03-19 00:36:56.846938 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:36:56.846944 | orchestrator | Thursday 19 March 2026 00:36:56 +0000 (0:00:00.899) 0:00:15.492 ******** 2026-03-19 00:36:56.846949 | orchestrator | =============================================================================== 2026-03-19 00:36:56.846955 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.19s 2026-03-19 00:36:56.846963 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.96s 2026-03-19 00:36:56.846968 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.73s 2026-03-19 00:36:56.846974 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-03-19 00:36:56.846979 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2026-03-19 00:36:56.846984 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2026-03-19 00:36:56.846990 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2026-03-19 00:36:56.846995 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2026-03-19 00:36:56.847000 | orchestrator | osism.services.wireguard : Get 
public key - server ---------------------- 0.42s 2026-03-19 00:36:56.847005 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2026-03-19 00:36:56.847011 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-03-19 00:36:57.013984 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-19 00:36:57.048883 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-19 00:36:57.048992 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-19 00:36:57.130917 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 181 0 --:--:-- --:--:-- --:--:-- 182 2026-03-19 00:36:57.144019 | orchestrator | + osism apply --environment custom workarounds 2026-03-19 00:36:58.414190 | orchestrator | 2026-03-19 00:36:58 | INFO  | Trying to run play workarounds in environment custom 2026-03-19 00:37:08.471076 | orchestrator | 2026-03-19 00:37:08 | INFO  | Prepare task for execution of workarounds. 2026-03-19 00:37:08.550825 | orchestrator | 2026-03-19 00:37:08 | INFO  | Task 7409fd99-34dc-4688-99a0-fb7c77935446 (workarounds) was prepared for execution. 2026-03-19 00:37:08.550918 | orchestrator | 2026-03-19 00:37:08 | INFO  | It takes a moment until task 7409fd99-34dc-4688-99a0-fb7c77935446 (workarounds) has been started and output is visible here. 
2026-03-19 00:37:32.827343 | orchestrator | 2026-03-19 00:37:32.827440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:37:32.827454 | orchestrator | 2026-03-19 00:37:32.827464 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-19 00:37:32.827474 | orchestrator | Thursday 19 March 2026 00:37:11 +0000 (0:00:00.173) 0:00:00.173 ******** 2026-03-19 00:37:32.827485 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827495 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827504 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827513 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827523 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827533 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827543 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-19 00:37:32.827580 | orchestrator | 2026-03-19 00:37:32.827588 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-19 00:37:32.827594 | orchestrator | 2026-03-19 00:37:32.827623 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-19 00:37:32.827629 | orchestrator | Thursday 19 March 2026 00:37:12 +0000 (0:00:00.696) 0:00:00.869 ******** 2026-03-19 00:37:32.827635 | orchestrator | ok: [testbed-manager] 2026-03-19 00:37:32.827642 | orchestrator | 2026-03-19 00:37:32.827648 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-19 00:37:32.827653 | orchestrator | 2026-03-19 00:37:32.827659 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-19 00:37:32.827665 | orchestrator | Thursday 19 March 2026 00:37:14 +0000 (0:00:02.558) 0:00:03.427 ******** 2026-03-19 00:37:32.827670 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:37:32.827676 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:37:32.827682 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:37:32.827688 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:37:32.827693 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:37:32.827699 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:37:32.827704 | orchestrator | 2026-03-19 00:37:32.827710 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-19 00:37:32.827715 | orchestrator | 2026-03-19 00:37:32.827721 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-19 00:37:32.827727 | orchestrator | Thursday 19 March 2026 00:37:17 +0000 (0:00:02.338) 0:00:05.766 ******** 2026-03-19 00:37:32.827735 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-19 00:37:32.827747 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-19 00:37:32.827757 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-19 00:37:32.827767 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-19 00:37:32.827776 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-19 00:37:32.827802 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-19 00:37:32.827813 | orchestrator | 2026-03-19 00:37:32.827822 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-19 00:37:32.827832 | orchestrator | Thursday 19 March 2026 00:37:18 +0000 (0:00:01.295) 0:00:07.061 ******** 2026-03-19 00:37:32.827842 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:37:32.827852 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:37:32.827864 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:37:32.827873 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:37:32.827884 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:37:32.827890 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:37:32.827895 | orchestrator | 2026-03-19 00:37:32.827901 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-19 00:37:32.827908 | orchestrator | Thursday 19 March 2026 00:37:22 +0000 (0:00:03.832) 0:00:10.894 ******** 2026-03-19 00:37:32.827915 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:37:32.827922 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:37:32.827928 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:37:32.827935 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:37:32.827941 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:37:32.827947 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:37:32.827954 | orchestrator | 2026-03-19 00:37:32.827960 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-19 00:37:32.827967 | orchestrator | 2026-03-19 00:37:32.827973 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-19 00:37:32.827980 | orchestrator | Thursday 19 March 2026 00:37:22 +0000 (0:00:00.472) 0:00:11.366 ******** 2026-03-19 00:37:32.827992 | orchestrator | changed: [testbed-manager] 2026-03-19 00:37:32.827999 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:37:32.828006 | orchestrator | changed: [testbed-node-1] 2026-03-19 
00:37:32.828012 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:37:32.828019 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:37:32.828025 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:37:32.828032 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:37:32.828038 | orchestrator | 2026-03-19 00:37:32.828045 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-19 00:37:32.828051 | orchestrator | Thursday 19 March 2026 00:37:24 +0000 (0:00:01.716) 0:00:13.082 ******** 2026-03-19 00:37:32.828058 | orchestrator | changed: [testbed-manager] 2026-03-19 00:37:32.828065 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:37:32.828071 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:37:32.828077 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:37:32.828084 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:37:32.828090 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:37:32.828111 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:37:32.828120 | orchestrator | 2026-03-19 00:37:32.828131 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-19 00:37:32.828140 | orchestrator | Thursday 19 March 2026 00:37:26 +0000 (0:00:01.447) 0:00:14.530 ******** 2026-03-19 00:37:32.828150 | orchestrator | ok: [testbed-manager] 2026-03-19 00:37:32.828159 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:37:32.828168 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:37:32.828178 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:37:32.828187 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:37:32.828195 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:37:32.828203 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:37:32.828211 | orchestrator | 2026-03-19 00:37:32.828220 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-19 00:37:32.828228 | orchestrator 
| Thursday 19 March 2026 00:37:27 +0000 (0:00:01.535) 0:00:16.065 ******** 2026-03-19 00:37:32.828236 | orchestrator | changed: [testbed-manager] 2026-03-19 00:37:32.828245 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:37:32.828255 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:37:32.828265 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:37:32.828275 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:37:32.828285 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:37:32.828294 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:37:32.828303 | orchestrator | 2026-03-19 00:37:32.828309 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-19 00:37:32.828314 | orchestrator | Thursday 19 March 2026 00:37:29 +0000 (0:00:01.570) 0:00:17.636 ******** 2026-03-19 00:37:32.828320 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:37:32.828326 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:37:32.828331 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:37:32.828337 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:37:32.828342 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:37:32.828348 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:37:32.828353 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:37:32.828359 | orchestrator | 2026-03-19 00:37:32.828365 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-19 00:37:32.828370 | orchestrator | 2026-03-19 00:37:32.828376 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-19 00:37:32.828382 | orchestrator | Thursday 19 March 2026 00:37:29 +0000 (0:00:00.641) 0:00:18.278 ******** 2026-03-19 00:37:32.828387 | orchestrator | ok: [testbed-manager] 2026-03-19 00:37:32.828393 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:37:32.828399 | orchestrator | ok: 
[testbed-node-1] 2026-03-19 00:37:32.828404 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:37:32.828410 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:37:32.828415 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:37:32.828427 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:37:32.828432 | orchestrator | 2026-03-19 00:37:32.828438 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:37:32.828445 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:37:32.828452 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:32.828458 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:32.828469 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:32.828475 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:32.828480 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:32.828486 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:32.828492 | orchestrator | 2026-03-19 00:37:32.828498 | orchestrator | 2026-03-19 00:37:32.828503 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:37:32.828509 | orchestrator | Thursday 19 March 2026 00:37:32 +0000 (0:00:03.001) 0:00:21.279 ******** 2026-03-19 00:37:32.828515 | orchestrator | =============================================================================== 2026-03-19 00:37:32.828521 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s 2026-03-19 00:37:32.828526 | orchestrator | 
Install python3-docker -------------------------------------------------- 3.00s 2026-03-19 00:37:32.828532 | orchestrator | Apply netplan configuration --------------------------------------------- 2.56s 2026-03-19 00:37:32.828537 | orchestrator | Apply netplan configuration --------------------------------------------- 2.34s 2026-03-19 00:37:32.828543 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2026-03-19 00:37:32.828549 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.57s 2026-03-19 00:37:32.828554 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s 2026-03-19 00:37:32.828560 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.45s 2026-03-19 00:37:32.828566 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.30s 2026-03-19 00:37:32.828571 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s 2026-03-19 00:37:32.828577 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2026-03-19 00:37:32.828588 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.47s 2026-03-19 00:37:33.149708 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-19 00:37:44.280099 | orchestrator | 2026-03-19 00:37:44 | INFO  | Prepare task for execution of reboot. 2026-03-19 00:37:44.355553 | orchestrator | 2026-03-19 00:37:44 | INFO  | Task 18f6dcde-17ee-4ffc-8a28-415cfb0323d7 (reboot) was prepared for execution. 2026-03-19 00:37:44.355672 | orchestrator | 2026-03-19 00:37:44 | INFO  | It takes a moment until task 18f6dcde-17ee-4ffc-8a28-415cfb0323d7 (reboot) has been started and output is visible here. 
2026-03-19 00:37:55.522615 | orchestrator | 2026-03-19 00:37:55.522728 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-19 00:37:55.522743 | orchestrator | 2026-03-19 00:37:55.522753 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-19 00:37:55.522787 | orchestrator | Thursday 19 March 2026 00:37:47 +0000 (0:00:00.247) 0:00:00.247 ******** 2026-03-19 00:37:55.522797 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:37:55.522821 | orchestrator | 2026-03-19 00:37:55.522831 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-19 00:37:55.522840 | orchestrator | Thursday 19 March 2026 00:37:47 +0000 (0:00:00.142) 0:00:00.390 ******** 2026-03-19 00:37:55.522848 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:37:55.522857 | orchestrator | 2026-03-19 00:37:55.522866 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-19 00:37:55.522874 | orchestrator | Thursday 19 March 2026 00:37:49 +0000 (0:00:01.283) 0:00:01.674 ******** 2026-03-19 00:37:55.522883 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:37:55.522892 | orchestrator | 2026-03-19 00:37:55.522900 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-19 00:37:55.522909 | orchestrator | 2026-03-19 00:37:55.522918 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-19 00:37:55.522926 | orchestrator | Thursday 19 March 2026 00:37:49 +0000 (0:00:00.103) 0:00:01.777 ******** 2026-03-19 00:37:55.522935 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:37:55.522943 | orchestrator | 2026-03-19 00:37:55.522952 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-19 00:37:55.522960 | orchestrator | Thursday 19 March 
2026 00:37:49 +0000 (0:00:00.101) 0:00:01.878 ******** 2026-03-19 00:37:55.522969 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:37:55.522977 | orchestrator | 2026-03-19 00:37:55.522986 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-19 00:37:55.522995 | orchestrator | Thursday 19 March 2026 00:37:50 +0000 (0:00:00.998) 0:00:02.877 ******** 2026-03-19 00:37:55.523004 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:37:55.523012 | orchestrator | 2026-03-19 00:37:55.523021 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-19 00:37:55.523030 | orchestrator | 2026-03-19 00:37:55.523038 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-19 00:37:55.523047 | orchestrator | Thursday 19 March 2026 00:37:50 +0000 (0:00:00.102) 0:00:02.979 ******** 2026-03-19 00:37:55.523056 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:37:55.523064 | orchestrator | 2026-03-19 00:37:55.523073 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-19 00:37:55.523081 | orchestrator | Thursday 19 March 2026 00:37:50 +0000 (0:00:00.094) 0:00:03.073 ******** 2026-03-19 00:37:55.523110 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:37:55.523125 | orchestrator | 2026-03-19 00:37:55.523137 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-19 00:37:55.523151 | orchestrator | Thursday 19 March 2026 00:37:51 +0000 (0:00:01.023) 0:00:04.097 ******** 2026-03-19 00:37:55.523164 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:37:55.523178 | orchestrator | 2026-03-19 00:37:55.523190 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-19 00:37:55.523204 | orchestrator | 2026-03-19 00:37:55.523217 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-03-19 00:37:55.523232 | orchestrator | Thursday 19 March 2026 00:37:51 +0000 (0:00:00.095) 0:00:04.192 ******** 2026-03-19 00:37:55.523247 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:37:55.523261 | orchestrator | 2026-03-19 00:37:55.523276 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-19 00:37:55.523291 | orchestrator | Thursday 19 March 2026 00:37:51 +0000 (0:00:00.096) 0:00:04.289 ******** 2026-03-19 00:37:55.523305 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:37:55.523316 | orchestrator | 2026-03-19 00:37:55.523324 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-19 00:37:55.523333 | orchestrator | Thursday 19 March 2026 00:37:52 +0000 (0:00:01.022) 0:00:05.312 ******** 2026-03-19 00:37:55.523357 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:37:55.523376 | orchestrator | 2026-03-19 00:37:55.523385 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-19 00:37:55.523394 | orchestrator | 2026-03-19 00:37:55.523403 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-19 00:37:55.523411 | orchestrator | Thursday 19 March 2026 00:37:52 +0000 (0:00:00.100) 0:00:05.412 ******** 2026-03-19 00:37:55.523420 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:37:55.523428 | orchestrator | 2026-03-19 00:37:55.523437 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-19 00:37:55.523446 | orchestrator | Thursday 19 March 2026 00:37:52 +0000 (0:00:00.096) 0:00:05.508 ******** 2026-03-19 00:37:55.523454 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:37:55.523463 | orchestrator | 2026-03-19 00:37:55.523471 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-19 00:37:55.523480 | orchestrator | Thursday 19 March 2026 00:37:53 +0000 (0:00:01.157) 0:00:06.666 ******** 2026-03-19 00:37:55.523489 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:37:55.523497 | orchestrator | 2026-03-19 00:37:55.523506 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-19 00:37:55.523515 | orchestrator | 2026-03-19 00:37:55.523523 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-19 00:37:55.523532 | orchestrator | Thursday 19 March 2026 00:37:54 +0000 (0:00:00.104) 0:00:06.770 ******** 2026-03-19 00:37:55.523540 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:37:55.523549 | orchestrator | 2026-03-19 00:37:55.523557 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-19 00:37:55.523566 | orchestrator | Thursday 19 March 2026 00:37:54 +0000 (0:00:00.098) 0:00:06.868 ******** 2026-03-19 00:37:55.523574 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:37:55.523679 | orchestrator | 2026-03-19 00:37:55.523693 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-19 00:37:55.523702 | orchestrator | Thursday 19 March 2026 00:37:55 +0000 (0:00:01.052) 0:00:07.921 ******** 2026-03-19 00:37:55.523728 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:37:55.523738 | orchestrator | 2026-03-19 00:37:55.523746 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:37:55.523758 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:55.523774 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:55.523790 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-19 00:37:55.523806 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:55.523821 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:55.523837 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:37:55.523846 | orchestrator | 2026-03-19 00:37:55.523855 | orchestrator | 2026-03-19 00:37:55.523864 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:37:55.523872 | orchestrator | Thursday 19 March 2026 00:37:55 +0000 (0:00:00.034) 0:00:07.956 ******** 2026-03-19 00:37:55.523881 | orchestrator | =============================================================================== 2026-03-19 00:37:55.523889 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.54s 2026-03-19 00:37:55.523898 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-03-19 00:37:55.523907 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-03-19 00:37:55.687746 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-19 00:38:07.091652 | orchestrator | 2026-03-19 00:38:07 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-19 00:38:07.159197 | orchestrator | 2026-03-19 00:38:07 | INFO  | Task 3819c5cd-2883-49d9-b604-bde993a7927c (wait-for-connection) was prepared for execution. 2026-03-19 00:38:07.159267 | orchestrator | 2026-03-19 00:38:07 | INFO  | It takes a moment until task 3819c5cd-2883-49d9-b604-bde993a7927c (wait-for-connection) has been started and output is visible here. 
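The `wait-for-connection` task queued above retries a connection probe against each rebooted node until it answers or a timeout expires. A minimal shell sketch of that polling pattern (the probe command and the timeout value are illustrative assumptions, not taken from the playbook):

```shell
# Retry "$@" until it succeeds or the timeout (seconds, first argument)
# expires. Returns 0 on success, 1 on timeout.
wait_until() {
    local timeout="$1"; shift
    local deadline=$(( $(date +%s) + timeout ))
    until "$@"; do
        (( $(date +%s) >= deadline )) && return 1
        sleep 1
    done
}

# Hypothetical usage mirroring the play's intent, e.g.:
#   wait_until 600 ssh -o BatchMode=yes -o ConnectTimeout=5 testbed-node-0 true
```

The Ansible module does the equivalent internally; the sketch only shows the retry-until-deadline shape of the operation.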
2026-03-19 00:38:21.909978 | orchestrator | 2026-03-19 00:38:21.910117 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-19 00:38:21.910126 | orchestrator | 2026-03-19 00:38:21.910131 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-19 00:38:21.910135 | orchestrator | Thursday 19 March 2026 00:38:10 +0000 (0:00:00.276) 0:00:00.276 ******** 2026-03-19 00:38:21.910139 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:38:21.910144 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:38:21.910148 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:38:21.910152 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:38:21.910156 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:38:21.910161 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:38:21.910164 | orchestrator | 2026-03-19 00:38:21.910168 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:38:21.910173 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:38:21.910179 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:38:21.910183 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:38:21.910187 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:38:21.910191 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:38:21.910194 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:38:21.910198 | orchestrator | 2026-03-19 00:38:21.910202 | orchestrator | 2026-03-19 00:38:21.910206 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-19 00:38:21.910210 | orchestrator | Thursday 19 March 2026 00:38:21 +0000 (0:00:11.515) 0:00:11.791 ******** 2026-03-19 00:38:21.910213 | orchestrator | =============================================================================== 2026-03-19 00:38:21.910217 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-03-19 00:38:22.084325 | orchestrator | + osism apply hddtemp 2026-03-19 00:38:33.399144 | orchestrator | 2026-03-19 00:38:33 | INFO  | Prepare task for execution of hddtemp. 2026-03-19 00:38:33.470598 | orchestrator | 2026-03-19 00:38:33 | INFO  | Task 27ed3e5d-5e01-4c8b-9081-b9fba243725a (hddtemp) was prepared for execution. 2026-03-19 00:38:33.470706 | orchestrator | 2026-03-19 00:38:33 | INFO  | It takes a moment until task 27ed3e5d-5e01-4c8b-9081-b9fba243725a (hddtemp) has been started and output is visible here. 2026-03-19 00:39:00.917896 | orchestrator | 2026-03-19 00:39:00.917998 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-19 00:39:00.918007 | orchestrator | 2026-03-19 00:39:00.918056 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-19 00:39:00.918065 | orchestrator | Thursday 19 March 2026 00:38:36 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-03-19 00:39:00.918100 | orchestrator | ok: [testbed-manager] 2026-03-19 00:39:00.918108 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:39:00.918114 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:39:00.918119 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:39:00.918125 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:39:00.918131 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:39:00.918138 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:39:00.918144 | orchestrator | 2026-03-19 00:39:00.918150 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-19 00:39:00.918156 | orchestrator | Thursday 19 March 2026 00:38:37 +0000 (0:00:00.468) 0:00:00.765 ******** 2026-03-19 00:39:00.918164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:39:00.918171 | orchestrator | 2026-03-19 00:39:00.918177 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-19 00:39:00.918183 | orchestrator | Thursday 19 March 2026 00:38:38 +0000 (0:00:01.010) 0:00:01.776 ******** 2026-03-19 00:39:00.918189 | orchestrator | ok: [testbed-manager] 2026-03-19 00:39:00.918195 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:39:00.918201 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:39:00.918207 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:39:00.918213 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:39:00.918219 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:39:00.918225 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:39:00.918231 | orchestrator | 2026-03-19 00:39:00.918237 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-19 00:39:00.918243 | orchestrator | Thursday 19 March 2026 00:38:40 +0000 (0:00:02.292) 0:00:04.068 ******** 2026-03-19 00:39:00.918249 | orchestrator | changed: [testbed-manager] 2026-03-19 00:39:00.918256 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:39:00.918262 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:39:00.918268 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:39:00.918274 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:39:00.918280 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:39:00.918286 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:39:00.918292 | 
orchestrator | 2026-03-19 00:39:00.918313 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-19 00:39:00.918319 | orchestrator | Thursday 19 March 2026 00:38:41 +0000 (0:00:00.964) 0:00:05.032 ******** 2026-03-19 00:39:00.918325 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:39:00.918331 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:39:00.918337 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:39:00.918342 | orchestrator | ok: [testbed-manager] 2026-03-19 00:39:00.918348 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:39:00.918354 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:39:00.918360 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:39:00.918366 | orchestrator | 2026-03-19 00:39:00.918372 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-19 00:39:00.918378 | orchestrator | Thursday 19 March 2026 00:38:43 +0000 (0:00:02.180) 0:00:07.213 ******** 2026-03-19 00:39:00.918384 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:39:00.918390 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:39:00.918397 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:39:00.918402 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:39:00.918408 | orchestrator | changed: [testbed-manager] 2026-03-19 00:39:00.918413 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:39:00.918419 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:39:00.918425 | orchestrator | 2026-03-19 00:39:00.918431 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-19 00:39:00.918437 | orchestrator | Thursday 19 March 2026 00:38:44 +0000 (0:00:00.577) 0:00:07.791 ******** 2026-03-19 00:39:00.918443 | orchestrator | changed: [testbed-manager] 2026-03-19 00:39:00.918449 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:39:00.918460 | orchestrator | changed: [testbed-node-2] 
2026-03-19 00:39:00.918466 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:39:00.918472 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:39:00.918477 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:39:00.918483 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:39:00.918489 | orchestrator | 2026-03-19 00:39:00.918496 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-19 00:39:00.918502 | orchestrator | Thursday 19 March 2026 00:38:57 +0000 (0:00:13.586) 0:00:21.377 ******** 2026-03-19 00:39:00.918509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:39:00.918515 | orchestrator | 2026-03-19 00:39:00.918521 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-19 00:39:00.918527 | orchestrator | Thursday 19 March 2026 00:38:58 +0000 (0:00:01.091) 0:00:22.469 ******** 2026-03-19 00:39:00.918533 | orchestrator | changed: [testbed-manager] 2026-03-19 00:39:00.918555 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:39:00.918561 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:39:00.918567 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:39:00.918573 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:39:00.918578 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:39:00.918584 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:39:00.918589 | orchestrator | 2026-03-19 00:39:00.918595 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:39:00.918601 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:39:00.918628 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:39:00.918635 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:39:00.918641 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:39:00.918647 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:39:00.918652 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:39:00.918658 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 00:39:00.918664 | orchestrator | 2026-03-19 00:39:00.918670 | orchestrator | 2026-03-19 00:39:00.918675 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:39:00.918681 | orchestrator | Thursday 19 March 2026 00:39:00 +0000 (0:00:01.786) 0:00:24.255 ******** 2026-03-19 00:39:00.918687 | orchestrator | =============================================================================== 2026-03-19 00:39:00.918693 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.59s 2026-03-19 00:39:00.918699 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.29s 2026-03-19 00:39:00.918705 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.18s 2026-03-19 00:39:00.918711 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.79s 2026-03-19 00:39:00.918717 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.09s 2026-03-19 00:39:00.918722 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s 2026-03-19 00:39:00.918733 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.96s 2026-03-19 00:39:00.918744 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.58s 2026-03-19 00:39:00.918751 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.47s 2026-03-19 00:39:01.087629 | orchestrator | ++ semver latest 7.1.1 2026-03-19 00:39:01.130715 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-19 00:39:01.130806 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-19 00:39:01.130818 | orchestrator | + sudo systemctl restart manager.service 2026-03-19 00:39:14.389268 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 00:39:14.389381 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-19 00:39:14.389398 | orchestrator | + local max_attempts=60 2026-03-19 00:39:14.389411 | orchestrator | + local name=ceph-ansible 2026-03-19 00:39:14.389421 | orchestrator | + local attempt_num=1 2026-03-19 00:39:14.389433 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:14.427901 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:14.428001 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:14.428015 | orchestrator | + sleep 5 2026-03-19 00:39:19.431603 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:19.468850 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:19.468949 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:19.468964 | orchestrator | + sleep 5 2026-03-19 00:39:24.472150 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:24.509796 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:24.509893 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:24.509909 | orchestrator | + sleep 5 2026-03-19 00:39:29.514305 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:29.550068 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:29.550173 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:29.550189 | orchestrator | + sleep 5 2026-03-19 00:39:34.553009 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:34.589901 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:34.590106 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:34.590913 | orchestrator | + sleep 5 2026-03-19 00:39:39.594074 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:39.628614 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:39.628717 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:39.628730 | orchestrator | + sleep 5 2026-03-19 00:39:44.633651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:44.675064 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:44.675170 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:44.675185 | orchestrator | + sleep 5 2026-03-19 00:39:49.679168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:49.715771 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:49.715875 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:49.715891 | orchestrator | + sleep 5 2026-03-19 00:39:54.718852 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:54.762700 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:54.762836 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:54.762855 | orchestrator | + sleep 5 2026-03-19 00:39:59.767965 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:39:59.799673 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 00:39:59.799784 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:39:59.799798 | orchestrator | + sleep 5 2026-03-19 00:40:04.804622 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:40:04.839760 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 00:40:04.839845 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:40:04.839854 | orchestrator | + sleep 5 2026-03-19 00:40:09.843970 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:40:09.881228 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 00:40:09.881333 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:40:09.881386 | orchestrator | + sleep 5 2026-03-19 00:40:14.886405 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:40:14.918290 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 00:40:14.918449 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 00:40:14.918478 | orchestrator | + sleep 5 2026-03-19 00:40:19.922205 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 00:40:19.959935 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:40:19.960039 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-19 00:40:19.960055 | orchestrator | + local max_attempts=60 2026-03-19 00:40:19.960067 | orchestrator | + local name=kolla-ansible 2026-03-19 00:40:19.960078 | orchestrator | + local attempt_num=1 2026-03-19 00:40:19.961019 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-19 00:40:19.998008 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:40:19.998182 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-19 00:40:19.998206 | orchestrator | + local max_attempts=60 2026-03-19 00:40:19.999269 | orchestrator | + local name=osism-ansible 2026-03-19 00:40:19.999317 | orchestrator | + local attempt_num=1 2026-03-19 00:40:19.999335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-19 00:40:20.033670 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 00:40:20.033794 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-19 00:40:20.033812 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-19 00:40:20.171471 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-19 00:40:20.291727 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-19 00:40:20.397670 | orchestrator | ARA in osism-ansible already disabled. 2026-03-19 00:40:20.540470 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-19 00:40:20.540609 | orchestrator | + osism apply gather-facts 2026-03-19 00:40:31.961447 | orchestrator | 2026-03-19 00:40:31 | INFO  | Prepare task for execution of gather-facts. 2026-03-19 00:40:32.035270 | orchestrator | 2026-03-19 00:40:32 | INFO  | Task dfb659f4-32cd-49f3-9758-0204651e737a (gather-facts) was prepared for execution. 2026-03-19 00:40:32.035371 | orchestrator | 2026-03-19 00:40:32 | INFO  | It takes a moment until task dfb659f4-32cd-49f3-9758-0204651e737a (gather-facts) has been started and output is visible here. 
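The `wait_for_container_healthy` trace above expands to a poll loop over `docker inspect -f '{{.State.Health.Status}}'`, matching the status string against `healthy` (the backslash-escaped `\h\e\a\l\t\h\y` is just how `set -x` re-quotes the literal pattern). A reconstruction, generalised so the health probe is a parameter (assumption: the original script inlines the `docker inspect` call directly):

```shell
# Poll a health probe until it prints "healthy" or max_attempts is
# exhausted; sleeps 5 s between attempts, as in the trace above.
wait_for_healthy() {
    local max_attempts="$1"; shift
    local attempt_num=1 status
    while true; do
        status="$("$@" 2>/dev/null)"
        [[ "$status" == healthy ]] && return 0
        (( attempt_num++ == max_attempts )) && return 1
        sleep 5
    done
}

# As seen in the log:
#   wait_for_healthy 60 /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
```

Note that `(( attempt_num++ == max_attempts ))` both counts and tests in one step: the arithmetic command succeeds exactly when the pre-increment value equals the limit, which is what terminates the loop in the trace.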
2026-03-19 00:40:35.609105 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-19 00:40:35.609212 | orchestrator | -vvvv to see details 2026-03-19 00:40:35.609228 | orchestrator | 2026-03-19 00:40:35.609241 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 00:40:35.609253 | orchestrator | 2026-03-19 00:40:35.609264 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-19 00:40:35.609278 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609291 | orchestrator | ...ignoring 2026-03-19 00:40:35.609303 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609314 | orchestrator | ...ignoring 2026-03-19 00:40:35.609349 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609361 | orchestrator | ...ignoring 2026-03-19 00:40:35.609373 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609411 | orchestrator | ...ignoring 2026-03-19 00:40:35.609424 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609435 | orchestrator | ...ignoring 2026-03-19 00:40:35.609446 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609457 | orchestrator | ...ignoring 2026-03-19 00:40:35.609468 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-19 00:40:35.609547 | orchestrator | ...ignoring 2026-03-19 00:40:35.609566 | orchestrator | 2026-03-19 00:40:35.609583 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 00:40:35.609595 | orchestrator | 2026-03-19 00:40:35.609606 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 00:40:35.609617 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:40:35.609629 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:40:35.609642 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:40:35.609654 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:40:35.609667 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:40:35.609679 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:40:35.609692 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:40:35.609703 | orchestrator | 2026-03-19 00:40:35.609714 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:40:35.609726 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609738 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609750 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609761 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609789 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609801 | orchestrator | testbed-node-4 : ok=1  changed=0 
unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609812 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:40:35.609823 | orchestrator | 2026-03-19 00:40:35.720058 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-19 00:40:35.740566 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-19 00:40:35.756359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-19 00:40:35.775650 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-19 00:40:35.790283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-19 00:40:35.809562 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-19 00:40:35.826398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-19 00:40:35.839088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-19 00:40:35.855926 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-19 00:40:35.872892 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-19 00:40:35.890339 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-19 00:40:35.903402 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh 
/usr/local/bin/upgrade-ceph-with-rook 2026-03-19 00:40:35.921435 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-19 00:40:35.938861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-19 00:40:35.952073 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-19 00:40:35.968101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-19 00:40:35.986007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-19 00:40:35.997890 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-19 00:40:36.015012 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-19 00:40:36.030913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-19 00:40:36.047399 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-19 00:40:36.062266 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-19 00:40:36.076159 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-19 00:40:36.095726 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-19 00:40:36.189230 | orchestrator | ok: Runtime: 0:23:03.877526 2026-03-19 00:40:36.294446 | 2026-03-19 00:40:36.294592 | TASK [Deploy services] 2026-03-19 00:40:36.829412 | orchestrator | skipping: Conditional result was 
False 2026-03-19 00:40:36.849721 | 2026-03-19 00:40:36.849905 | TASK [Deploy in a nutshell] 2026-03-19 00:40:37.569809 | orchestrator | + set -e 2026-03-19 00:40:37.569994 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 00:40:37.570089 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 00:40:37.570117 | orchestrator | ++ INTERACTIVE=false 2026-03-19 00:40:37.570130 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 00:40:37.570144 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 00:40:37.570157 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 00:40:37.570201 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 00:40:37.570229 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 00:40:37.570244 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 00:40:37.570259 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 00:40:37.570285 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 00:40:37.570304 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 00:40:37.570315 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-19 00:40:37.570336 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-19 00:40:37.570347 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 00:40:37.570361 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 00:40:37.570372 | orchestrator | ++ export ARA=false 2026-03-19 00:40:37.570384 | orchestrator | ++ ARA=false 2026-03-19 00:40:37.570395 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 00:40:37.570407 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 00:40:37.570418 | orchestrator | ++ export TEMPEST=true 2026-03-19 00:40:37.570428 | orchestrator | ++ TEMPEST=true 2026-03-19 00:40:37.570439 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 00:40:37.570450 | orchestrator | ++ IS_ZUUL=true 2026-03-19 00:40:37.570461 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218 2026-03-19 00:40:37.570497 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.218 2026-03-19 00:40:37.570509 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 00:40:37.570520 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 00:40:37.570531 | orchestrator | 2026-03-19 00:40:37.570542 | orchestrator | # PULL IMAGES 2026-03-19 00:40:37.570553 | orchestrator | 2026-03-19 00:40:37.570564 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 00:40:37.570576 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 00:40:37.570587 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 00:40:37.570598 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 00:40:37.570609 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 00:40:37.570628 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 00:40:37.570640 | orchestrator | + echo 2026-03-19 00:40:37.570651 | orchestrator | + echo '# PULL IMAGES' 2026-03-19 00:40:37.570662 | orchestrator | + echo 2026-03-19 00:40:37.571311 | orchestrator | ++ semver latest 7.0.0 2026-03-19 00:40:37.619059 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-19 00:40:37.619148 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-19 00:40:37.619166 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-19 00:40:38.722428 | orchestrator | 2026-03-19 00:40:38 | INFO  | Trying to run play pull-images in environment custom 2026-03-19 00:40:48.779020 | orchestrator | 2026-03-19 00:40:48 | INFO  | Prepare task for execution of pull-images. 2026-03-19 00:40:48.845000 | orchestrator | 2026-03-19 00:40:48 | INFO  | Task f3da07da-e435-4f6c-8ee1-79fd6270d409 (pull-images) was prepared for execution. 2026-03-19 00:40:48.845088 | orchestrator | 2026-03-19 00:40:48 | INFO  | Task f3da07da-e435-4f6c-8ee1-79fd6270d409 is running in background. No more output. Check ARA for logs. 
2026-03-19 00:40:50.112788 | orchestrator | 2026-03-19 00:40:50 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-19 00:41:00.174614 | orchestrator | 2026-03-19 00:41:00 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-19 00:41:00.252224 | orchestrator | 2026-03-19 00:41:00 | INFO  | Task 53f91e6d-8451-4fa6-9172-db458904e16d (wipe-partitions) was prepared for execution. 2026-03-19 00:41:00.252358 | orchestrator | 2026-03-19 00:41:00 | INFO  | It takes a moment until task 53f91e6d-8451-4fa6-9172-db458904e16d (wipe-partitions) has been started and output is visible here. 2026-03-19 00:41:11.606487 | orchestrator | 2026-03-19 00:41:11.606615 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-19 00:41:11.606632 | orchestrator | 2026-03-19 00:41:11.606644 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-19 00:41:11.606663 | orchestrator | Thursday 19 March 2026 00:41:03 +0000 (0:00:00.150) 0:00:00.150 ******** 2026-03-19 00:41:11.606707 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:41:11.606721 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:41:11.606732 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:41:11.606743 | orchestrator | 2026-03-19 00:41:11.606754 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-19 00:41:11.606765 | orchestrator | Thursday 19 March 2026 00:41:04 +0000 (0:00:01.191) 0:00:01.341 ******** 2026-03-19 00:41:11.606779 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:11.606791 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:41:11.606803 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:41:11.606813 | orchestrator | 2026-03-19 00:41:11.606824 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-19 00:41:11.606835 | orchestrator | 
Thursday 19 March 2026 00:41:04 +0000 (0:00:00.215) 0:00:01.557 ******** 2026-03-19 00:41:11.606846 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:11.606858 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:41:11.606869 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:41:11.606879 | orchestrator | 2026-03-19 00:41:11.606890 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-19 00:41:11.606901 | orchestrator | Thursday 19 March 2026 00:41:05 +0000 (0:00:00.520) 0:00:02.077 ******** 2026-03-19 00:41:11.606912 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:11.606923 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:41:11.606933 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:41:11.606944 | orchestrator | 2026-03-19 00:41:11.606955 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-19 00:41:11.606966 | orchestrator | Thursday 19 March 2026 00:41:05 +0000 (0:00:00.217) 0:00:02.295 ******** 2026-03-19 00:41:11.606977 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-19 00:41:11.606995 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-19 00:41:11.607008 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-19 00:41:11.607021 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-19 00:41:11.607034 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-19 00:41:11.607046 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-19 00:41:11.607058 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-19 00:41:11.607068 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-19 00:41:11.607079 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-19 00:41:11.607090 | orchestrator | 2026-03-19 00:41:11.607101 | orchestrator | TASK [Wipe partitions with wipefs] 
********************************************* 2026-03-19 00:41:11.607112 | orchestrator | Thursday 19 March 2026 00:41:06 +0000 (0:00:01.347) 0:00:03.642 ******** 2026-03-19 00:41:11.607124 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-19 00:41:11.607135 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-19 00:41:11.607145 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-19 00:41:11.607157 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-19 00:41:11.607176 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-19 00:41:11.607195 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-19 00:41:11.607213 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-19 00:41:11.607232 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-19 00:41:11.607252 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-19 00:41:11.607266 | orchestrator | 2026-03-19 00:41:11.607284 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-19 00:41:11.607295 | orchestrator | Thursday 19 March 2026 00:41:08 +0000 (0:00:01.387) 0:00:05.030 ******** 2026-03-19 00:41:11.607306 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-19 00:41:11.607317 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-19 00:41:11.607328 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-19 00:41:11.607339 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-19 00:41:11.607360 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-19 00:41:11.607371 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-19 00:41:11.607382 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-19 00:41:11.607392 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-19 00:41:11.607403 | orchestrator | changed: [testbed-node-5] => 
(item=/dev/sdd) 2026-03-19 00:41:11.607414 | orchestrator | 2026-03-19 00:41:11.607425 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-19 00:41:11.607436 | orchestrator | Thursday 19 March 2026 00:41:10 +0000 (0:00:02.101) 0:00:07.132 ******** 2026-03-19 00:41:11.607606 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:41:11.607645 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:41:11.607656 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:41:11.607667 | orchestrator | 2026-03-19 00:41:11.607679 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-03-19 00:41:11.607690 | orchestrator | Thursday 19 March 2026 00:41:10 +0000 (0:00:00.602) 0:00:07.734 ******** 2026-03-19 00:41:11.607701 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:41:11.607711 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:41:11.607722 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:41:11.607735 | orchestrator | 2026-03-19 00:41:11.607746 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:41:11.607758 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:11.607771 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:11.607804 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:11.607816 | orchestrator | 2026-03-19 00:41:11.607827 | orchestrator | 2026-03-19 00:41:11.607838 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:41:11.607848 | orchestrator | Thursday 19 March 2026 00:41:11 +0000 (0:00:00.598) 0:00:08.332 ******** 2026-03-19 00:41:11.607859 | orchestrator | 
=============================================================================== 2026-03-19 00:41:11.607870 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.10s 2026-03-19 00:41:11.607880 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.39s 2026-03-19 00:41:11.607891 | orchestrator | Check device availability ----------------------------------------------- 1.35s 2026-03-19 00:41:11.607902 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.19s 2026-03-19 00:41:11.607913 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2026-03-19 00:41:11.607924 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2026-03-19 00:41:11.607934 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.52s 2026-03-19 00:41:11.607945 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2026-03-19 00:41:11.607956 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2026-03-19 00:41:22.899112 | orchestrator | 2026-03-19 00:41:22 | INFO  | Prepare task for execution of facts. 2026-03-19 00:41:22.973409 | orchestrator | 2026-03-19 00:41:22 | INFO  | Task 38ee3ec0-60b3-4977-8a80-4cbab0146d08 (facts) was prepared for execution. 2026-03-19 00:41:22.973556 | orchestrator | 2026-03-19 00:41:22 | INFO  | It takes a moment until task 38ee3ec0-60b3-4977-8a80-4cbab0146d08 (facts) has been started and output is visible here. 
2026-03-19 00:41:34.490283 | orchestrator | 2026-03-19 00:41:34.490407 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-19 00:41:34.490424 | orchestrator | 2026-03-19 00:41:34.490538 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-19 00:41:34.490553 | orchestrator | Thursday 19 March 2026 00:41:26 +0000 (0:00:00.327) 0:00:00.327 ******** 2026-03-19 00:41:34.490564 | orchestrator | ok: [testbed-manager] 2026-03-19 00:41:34.490576 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:41:34.490587 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:34.490598 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:41:34.490608 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:41:34.490619 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:41:34.490629 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:41:34.490640 | orchestrator | 2026-03-19 00:41:34.490651 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-19 00:41:34.490662 | orchestrator | Thursday 19 March 2026 00:41:27 +0000 (0:00:01.340) 0:00:01.668 ******** 2026-03-19 00:41:34.490673 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:41:34.490685 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:41:34.490695 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:41:34.490706 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:41:34.490716 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:34.490727 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:41:34.490738 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:41:34.490748 | orchestrator | 2026-03-19 00:41:34.490759 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 00:41:34.490790 | orchestrator | 2026-03-19 00:41:34.490802 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-19 00:41:34.490814 | orchestrator | Thursday 19 March 2026 00:41:28 +0000 (0:00:01.184) 0:00:02.852 ******** 2026-03-19 00:41:34.490826 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:41:34.490840 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:41:34.490853 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:41:34.490865 | orchestrator | ok: [testbed-manager] 2026-03-19 00:41:34.490878 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:34.490896 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:41:34.490914 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:41:34.490933 | orchestrator | 2026-03-19 00:41:34.490951 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 00:41:34.490970 | orchestrator | 2026-03-19 00:41:34.490988 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 00:41:34.491007 | orchestrator | Thursday 19 March 2026 00:41:33 +0000 (0:00:05.268) 0:00:08.120 ******** 2026-03-19 00:41:34.491027 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:41:34.491047 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:41:34.491066 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:41:34.491084 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:41:34.491104 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:34.491123 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:41:34.491141 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:41:34.491159 | orchestrator | 2026-03-19 00:41:34.491171 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:41:34.491183 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:34.491196 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-19 00:41:34.491207 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:34.491218 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:34.491228 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:34.491252 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:34.491263 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:41:34.491274 | orchestrator | 2026-03-19 00:41:34.491285 | orchestrator | 2026-03-19 00:41:34.491296 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:41:34.491307 | orchestrator | Thursday 19 March 2026 00:41:34 +0000 (0:00:00.466) 0:00:08.586 ******** 2026-03-19 00:41:34.491318 | orchestrator | =============================================================================== 2026-03-19 00:41:34.491328 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.27s 2026-03-19 00:41:34.491339 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s 2026-03-19 00:41:34.491350 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2026-03-19 00:41:34.491361 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-03-19 00:41:35.759010 | orchestrator | 2026-03-19 00:41:35 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-19 00:41:35.814483 | orchestrator | 2026-03-19 00:41:35 | INFO  | Task f109d702-201f-4ef4-b60e-72b927d42f76 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-03-19 00:41:35.814598 | orchestrator | 2026-03-19 00:41:35 | INFO  | It takes a moment until task f109d702-201f-4ef4-b60e-72b927d42f76 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-19 00:41:47.299032 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-19 00:41:47.299144 | orchestrator | 2.16.14 2026-03-19 00:41:47.299165 | orchestrator | 2026-03-19 00:41:47.299178 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-19 00:41:47.299192 | orchestrator | 2026-03-19 00:41:47.299203 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 00:41:47.299215 | orchestrator | Thursday 19 March 2026 00:41:40 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-03-19 00:41:47.299228 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-19 00:41:47.299241 | orchestrator | 2026-03-19 00:41:47.299254 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 00:41:47.299267 | orchestrator | Thursday 19 March 2026 00:41:40 +0000 (0:00:00.235) 0:00:00.517 ******** 2026-03-19 00:41:47.299280 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:47.299290 | orchestrator | 2026-03-19 00:41:47.299297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299305 | orchestrator | Thursday 19 March 2026 00:41:40 +0000 (0:00:00.254) 0:00:00.772 ******** 2026-03-19 00:41:47.299321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-19 00:41:47.299329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-19 00:41:47.299336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-19 00:41:47.299344 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-19 00:41:47.299351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-19 00:41:47.299358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-19 00:41:47.299365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-19 00:41:47.299372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-19 00:41:47.299379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-19 00:41:47.299386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-19 00:41:47.299416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-19 00:41:47.299491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-19 00:41:47.299502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-19 00:41:47.299509 | orchestrator | 2026-03-19 00:41:47.299517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299524 | orchestrator | Thursday 19 March 2026 00:41:41 +0000 (0:00:00.368) 0:00:01.140 ******** 2026-03-19 00:41:47.299531 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299538 | orchestrator | 2026-03-19 00:41:47.299545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299553 | orchestrator | Thursday 19 March 2026 00:41:41 +0000 (0:00:00.464) 0:00:01.604 ******** 2026-03-19 00:41:47.299560 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299567 | orchestrator | 2026-03-19 00:41:47.299575 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299587 | orchestrator | Thursday 19 March 2026 00:41:41 +0000 (0:00:00.191) 0:00:01.795 ******** 2026-03-19 00:41:47.299596 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299604 | orchestrator | 2026-03-19 00:41:47.299612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299629 | orchestrator | Thursday 19 March 2026 00:41:41 +0000 (0:00:00.182) 0:00:01.977 ******** 2026-03-19 00:41:47.299638 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299646 | orchestrator | 2026-03-19 00:41:47.299662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299670 | orchestrator | Thursday 19 March 2026 00:41:42 +0000 (0:00:00.190) 0:00:02.168 ******** 2026-03-19 00:41:47.299679 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299687 | orchestrator | 2026-03-19 00:41:47.299695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299703 | orchestrator | Thursday 19 March 2026 00:41:42 +0000 (0:00:00.186) 0:00:02.354 ******** 2026-03-19 00:41:47.299711 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299719 | orchestrator | 2026-03-19 00:41:47.299728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299736 | orchestrator | Thursday 19 March 2026 00:41:42 +0000 (0:00:00.175) 0:00:02.530 ******** 2026-03-19 00:41:47.299744 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299752 | orchestrator | 2026-03-19 00:41:47.299759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299766 | orchestrator | Thursday 19 March 2026 00:41:42 +0000 (0:00:00.196) 0:00:02.726 ******** 
2026-03-19 00:41:47.299773 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.299780 | orchestrator | 2026-03-19 00:41:47.299787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299795 | orchestrator | Thursday 19 March 2026 00:41:42 +0000 (0:00:00.192) 0:00:02.918 ******** 2026-03-19 00:41:47.299802 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d) 2026-03-19 00:41:47.299810 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d) 2026-03-19 00:41:47.299817 | orchestrator | 2026-03-19 00:41:47.299824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299848 | orchestrator | Thursday 19 March 2026 00:41:43 +0000 (0:00:00.431) 0:00:03.350 ******** 2026-03-19 00:41:47.299856 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1) 2026-03-19 00:41:47.299863 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1) 2026-03-19 00:41:47.299871 | orchestrator | 2026-03-19 00:41:47.299883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299899 | orchestrator | Thursday 19 March 2026 00:41:43 +0000 (0:00:00.404) 0:00:03.754 ******** 2026-03-19 00:41:47.299907 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600) 2026-03-19 00:41:47.299914 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600) 2026-03-19 00:41:47.299921 | orchestrator | 2026-03-19 00:41:47.299928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299935 | orchestrator | Thursday 19 March 2026 00:41:44 
+0000 (0:00:00.577) 0:00:04.331 ******** 2026-03-19 00:41:47.299943 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3) 2026-03-19 00:41:47.299950 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3) 2026-03-19 00:41:47.299957 | orchestrator | 2026-03-19 00:41:47.299964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:41:47.299971 | orchestrator | Thursday 19 March 2026 00:41:44 +0000 (0:00:00.604) 0:00:04.935 ******** 2026-03-19 00:41:47.299978 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 00:41:47.299986 | orchestrator | 2026-03-19 00:41:47.299993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300000 | orchestrator | Thursday 19 March 2026 00:41:45 +0000 (0:00:00.717) 0:00:05.653 ******** 2026-03-19 00:41:47.300007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-19 00:41:47.300014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-19 00:41:47.300021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-19 00:41:47.300028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-19 00:41:47.300036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-19 00:41:47.300043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-19 00:41:47.300050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-19 00:41:47.300057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-03-19 00:41:47.300064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-19 00:41:47.300071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-19 00:41:47.300079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-19 00:41:47.300086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-19 00:41:47.300093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-19 00:41:47.300100 | orchestrator | 2026-03-19 00:41:47.300107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300114 | orchestrator | Thursday 19 March 2026 00:41:45 +0000 (0:00:00.349) 0:00:06.002 ******** 2026-03-19 00:41:47.300121 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300129 | orchestrator | 2026-03-19 00:41:47.300136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300143 | orchestrator | Thursday 19 March 2026 00:41:46 +0000 (0:00:00.191) 0:00:06.194 ******** 2026-03-19 00:41:47.300150 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300157 | orchestrator | 2026-03-19 00:41:47.300165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300179 | orchestrator | Thursday 19 March 2026 00:41:46 +0000 (0:00:00.171) 0:00:06.365 ******** 2026-03-19 00:41:47.300191 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300211 | orchestrator | 2026-03-19 00:41:47.300225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300239 | orchestrator | Thursday 19 March 2026 00:41:46 
+0000 (0:00:00.178) 0:00:06.544 ******** 2026-03-19 00:41:47.300251 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300262 | orchestrator | 2026-03-19 00:41:47.300270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300277 | orchestrator | Thursday 19 March 2026 00:41:46 +0000 (0:00:00.185) 0:00:06.730 ******** 2026-03-19 00:41:47.300284 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300291 | orchestrator | 2026-03-19 00:41:47.300298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300305 | orchestrator | Thursday 19 March 2026 00:41:46 +0000 (0:00:00.179) 0:00:06.909 ******** 2026-03-19 00:41:47.300312 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300319 | orchestrator | 2026-03-19 00:41:47.300326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:47.300333 | orchestrator | Thursday 19 March 2026 00:41:47 +0000 (0:00:00.186) 0:00:07.096 ******** 2026-03-19 00:41:47.300341 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:47.300348 | orchestrator | 2026-03-19 00:41:47.300359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:54.091288 | orchestrator | Thursday 19 March 2026 00:41:47 +0000 (0:00:00.222) 0:00:07.318 ******** 2026-03-19 00:41:54.091405 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091456 | orchestrator | 2026-03-19 00:41:54.091468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:54.091477 | orchestrator | Thursday 19 March 2026 00:41:47 +0000 (0:00:00.164) 0:00:07.483 ******** 2026-03-19 00:41:54.091486 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-19 00:41:54.091495 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-19 
00:41:54.091503 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-19 00:41:54.091511 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-19 00:41:54.091519 | orchestrator | 2026-03-19 00:41:54.091528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:54.091555 | orchestrator | Thursday 19 March 2026 00:41:48 +0000 (0:00:00.792) 0:00:08.276 ******** 2026-03-19 00:41:54.091563 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091571 | orchestrator | 2026-03-19 00:41:54.091579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:54.091587 | orchestrator | Thursday 19 March 2026 00:41:48 +0000 (0:00:00.166) 0:00:08.443 ******** 2026-03-19 00:41:54.091595 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091603 | orchestrator | 2026-03-19 00:41:54.091611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:54.091619 | orchestrator | Thursday 19 March 2026 00:41:48 +0000 (0:00:00.167) 0:00:08.611 ******** 2026-03-19 00:41:54.091627 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091634 | orchestrator | 2026-03-19 00:41:54.091642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:41:54.091650 | orchestrator | Thursday 19 March 2026 00:41:48 +0000 (0:00:00.183) 0:00:08.794 ******** 2026-03-19 00:41:54.091658 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091666 | orchestrator | 2026-03-19 00:41:54.091674 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-19 00:41:54.091682 | orchestrator | Thursday 19 March 2026 00:41:48 +0000 (0:00:00.177) 0:00:08.971 ******** 2026-03-19 00:41:54.091690 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-19 00:41:54.091698 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-19 00:41:54.091706 | orchestrator | 2026-03-19 00:41:54.091714 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-19 00:41:54.091722 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.145) 0:00:09.117 ******** 2026-03-19 00:41:54.091756 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091764 | orchestrator | 2026-03-19 00:41:54.091772 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-19 00:41:54.091780 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.124) 0:00:09.241 ******** 2026-03-19 00:41:54.091788 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091795 | orchestrator | 2026-03-19 00:41:54.091803 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-19 00:41:54.091811 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.122) 0:00:09.364 ******** 2026-03-19 00:41:54.091819 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091827 | orchestrator | 2026-03-19 00:41:54.091836 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-19 00:41:54.091845 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.120) 0:00:09.484 ******** 2026-03-19 00:41:54.091854 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:54.091864 | orchestrator | 2026-03-19 00:41:54.091873 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-19 00:41:54.091882 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.122) 0:00:09.606 ******** 2026-03-19 00:41:54.091892 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}}) 2026-03-19 00:41:54.091902 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd672a78a-4132-5655-a0fe-bae0f8eb714c'}}) 2026-03-19 00:41:54.091910 | orchestrator | 2026-03-19 00:41:54.091920 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-19 00:41:54.091929 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.140) 0:00:09.747 ******** 2026-03-19 00:41:54.091939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}})  2026-03-19 00:41:54.091954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd672a78a-4132-5655-a0fe-bae0f8eb714c'}})  2026-03-19 00:41:54.091967 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.091977 | orchestrator | 2026-03-19 00:41:54.091986 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-19 00:41:54.091994 | orchestrator | Thursday 19 March 2026 00:41:49 +0000 (0:00:00.128) 0:00:09.875 ******** 2026-03-19 00:41:54.092003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}})  2026-03-19 00:41:54.092013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd672a78a-4132-5655-a0fe-bae0f8eb714c'}})  2026-03-19 00:41:54.092022 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092031 | orchestrator | 2026-03-19 00:41:54.092040 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-19 00:41:54.092050 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.254) 0:00:10.130 ******** 2026-03-19 00:41:54.092059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}})  2026-03-19 00:41:54.092083 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd672a78a-4132-5655-a0fe-bae0f8eb714c'}})  2026-03-19 00:41:54.092092 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092100 | orchestrator | 2026-03-19 00:41:54.092108 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-19 00:41:54.092115 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.136) 0:00:10.267 ******** 2026-03-19 00:41:54.092123 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:54.092131 | orchestrator | 2026-03-19 00:41:54.092139 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-19 00:41:54.092147 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.124) 0:00:10.391 ******** 2026-03-19 00:41:54.092155 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:41:54.092169 | orchestrator | 2026-03-19 00:41:54.092177 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-19 00:41:54.092184 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.129) 0:00:10.520 ******** 2026-03-19 00:41:54.092192 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092200 | orchestrator | 2026-03-19 00:41:54.092208 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-19 00:41:54.092216 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.120) 0:00:10.641 ******** 2026-03-19 00:41:54.092224 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092232 | orchestrator | 2026-03-19 00:41:54.092240 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-19 00:41:54.092248 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.122) 0:00:10.764 ******** 2026-03-19 00:41:54.092255 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092263 | orchestrator | 2026-03-19 
00:41:54.092271 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-19 00:41:54.092279 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.116) 0:00:10.880 ******** 2026-03-19 00:41:54.092287 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 00:41:54.092295 | orchestrator |  "ceph_osd_devices": { 2026-03-19 00:41:54.092303 | orchestrator |  "sdb": { 2026-03-19 00:41:54.092311 | orchestrator |  "osd_lvm_uuid": "24d614e2-ec6e-5ed2-9057-307e4a3cb0c0" 2026-03-19 00:41:54.092319 | orchestrator |  }, 2026-03-19 00:41:54.092327 | orchestrator |  "sdc": { 2026-03-19 00:41:54.092334 | orchestrator |  "osd_lvm_uuid": "d672a78a-4132-5655-a0fe-bae0f8eb714c" 2026-03-19 00:41:54.092342 | orchestrator |  } 2026-03-19 00:41:54.092350 | orchestrator |  } 2026-03-19 00:41:54.092358 | orchestrator | } 2026-03-19 00:41:54.092366 | orchestrator | 2026-03-19 00:41:54.092374 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-19 00:41:54.092382 | orchestrator | Thursday 19 March 2026 00:41:50 +0000 (0:00:00.122) 0:00:11.002 ******** 2026-03-19 00:41:54.092390 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092398 | orchestrator | 2026-03-19 00:41:54.092405 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-19 00:41:54.092413 | orchestrator | Thursday 19 March 2026 00:41:51 +0000 (0:00:00.120) 0:00:11.123 ******** 2026-03-19 00:41:54.092435 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092444 | orchestrator | 2026-03-19 00:41:54.092452 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-19 00:41:54.092460 | orchestrator | Thursday 19 March 2026 00:41:51 +0000 (0:00:00.112) 0:00:11.236 ******** 2026-03-19 00:41:54.092467 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:41:54.092475 | orchestrator | 2026-03-19 
00:41:54.092483 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-19 00:41:54.092491 | orchestrator | Thursday 19 March 2026 00:41:51 +0000 (0:00:00.111) 0:00:11.347 ******** 2026-03-19 00:41:54.092499 | orchestrator | changed: [testbed-node-3] => { 2026-03-19 00:41:54.092506 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-19 00:41:54.092514 | orchestrator |  "ceph_osd_devices": { 2026-03-19 00:41:54.092522 | orchestrator |  "sdb": { 2026-03-19 00:41:54.092530 | orchestrator |  "osd_lvm_uuid": "24d614e2-ec6e-5ed2-9057-307e4a3cb0c0" 2026-03-19 00:41:54.092538 | orchestrator |  }, 2026-03-19 00:41:54.092546 | orchestrator |  "sdc": { 2026-03-19 00:41:54.092553 | orchestrator |  "osd_lvm_uuid": "d672a78a-4132-5655-a0fe-bae0f8eb714c" 2026-03-19 00:41:54.092561 | orchestrator |  } 2026-03-19 00:41:54.092569 | orchestrator |  }, 2026-03-19 00:41:54.092577 | orchestrator |  "lvm_volumes": [ 2026-03-19 00:41:54.092585 | orchestrator |  { 2026-03-19 00:41:54.092592 | orchestrator |  "data": "osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0", 2026-03-19 00:41:54.092600 | orchestrator |  "data_vg": "ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0" 2026-03-19 00:41:54.092613 | orchestrator |  }, 2026-03-19 00:41:54.092621 | orchestrator |  { 2026-03-19 00:41:54.092629 | orchestrator |  "data": "osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c", 2026-03-19 00:41:54.092637 | orchestrator |  "data_vg": "ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c" 2026-03-19 00:41:54.092644 | orchestrator |  } 2026-03-19 00:41:54.092652 | orchestrator |  ] 2026-03-19 00:41:54.092660 | orchestrator |  } 2026-03-19 00:41:54.092668 | orchestrator | } 2026-03-19 00:41:54.092676 | orchestrator | 2026-03-19 00:41:54.092684 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-19 00:41:54.092691 | orchestrator | Thursday 19 March 2026 00:41:51 +0000 (0:00:00.180) 0:00:11.528 ******** 2026-03-19 
00:41:54.092699 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-19 00:41:54.092707 | orchestrator | 2026-03-19 00:41:54.092715 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-19 00:41:54.092723 | orchestrator | 2026-03-19 00:41:54.092730 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 00:41:54.092738 | orchestrator | Thursday 19 March 2026 00:41:53 +0000 (0:00:02.109) 0:00:13.637 ******** 2026-03-19 00:41:54.092746 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-19 00:41:54.092754 | orchestrator | 2026-03-19 00:41:54.092761 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 00:41:54.092769 | orchestrator | Thursday 19 March 2026 00:41:53 +0000 (0:00:00.244) 0:00:13.882 ******** 2026-03-19 00:41:54.092777 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:41:54.092785 | orchestrator | 2026-03-19 00:41:54.092798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.320715 | orchestrator | Thursday 19 March 2026 00:41:54 +0000 (0:00:00.230) 0:00:14.113 ******** 2026-03-19 00:42:01.320841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-19 00:42:01.320857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-19 00:42:01.320869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-19 00:42:01.320880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-19 00:42:01.320891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-19 00:42:01.320902 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-19 00:42:01.320913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-19 00:42:01.320929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-19 00:42:01.320941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-19 00:42:01.320952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-19 00:42:01.320963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-19 00:42:01.320975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-19 00:42:01.321007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-19 00:42:01.321018 | orchestrator | 2026-03-19 00:42:01.321030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321041 | orchestrator | Thursday 19 March 2026 00:41:54 +0000 (0:00:00.408) 0:00:14.521 ******** 2026-03-19 00:42:01.321052 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321064 | orchestrator | 2026-03-19 00:42:01.321075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321086 | orchestrator | Thursday 19 March 2026 00:41:54 +0000 (0:00:00.209) 0:00:14.730 ******** 2026-03-19 00:42:01.321124 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321136 | orchestrator | 2026-03-19 00:42:01.321147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321158 | orchestrator | Thursday 19 March 2026 00:41:54 +0000 (0:00:00.185) 0:00:14.916 ******** 2026-03-19 00:42:01.321168 | orchestrator | skipping: 
[testbed-node-4] 2026-03-19 00:42:01.321179 | orchestrator | 2026-03-19 00:42:01.321190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321201 | orchestrator | Thursday 19 March 2026 00:41:55 +0000 (0:00:00.230) 0:00:15.147 ******** 2026-03-19 00:42:01.321212 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321223 | orchestrator | 2026-03-19 00:42:01.321233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321247 | orchestrator | Thursday 19 March 2026 00:41:55 +0000 (0:00:00.182) 0:00:15.330 ******** 2026-03-19 00:42:01.321260 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321272 | orchestrator | 2026-03-19 00:42:01.321284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321297 | orchestrator | Thursday 19 March 2026 00:41:55 +0000 (0:00:00.603) 0:00:15.934 ******** 2026-03-19 00:42:01.321310 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321322 | orchestrator | 2026-03-19 00:42:01.321334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321347 | orchestrator | Thursday 19 March 2026 00:41:56 +0000 (0:00:00.180) 0:00:16.114 ******** 2026-03-19 00:42:01.321360 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321372 | orchestrator | 2026-03-19 00:42:01.321384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321397 | orchestrator | Thursday 19 March 2026 00:41:56 +0000 (0:00:00.205) 0:00:16.320 ******** 2026-03-19 00:42:01.321409 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.321461 | orchestrator | 2026-03-19 00:42:01.321481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321503 | 
orchestrator | Thursday 19 March 2026 00:41:56 +0000 (0:00:00.221) 0:00:16.542 ******** 2026-03-19 00:42:01.321523 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b) 2026-03-19 00:42:01.321541 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b) 2026-03-19 00:42:01.321554 | orchestrator | 2026-03-19 00:42:01.321567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321581 | orchestrator | Thursday 19 March 2026 00:41:56 +0000 (0:00:00.403) 0:00:16.945 ******** 2026-03-19 00:42:01.321594 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f) 2026-03-19 00:42:01.321605 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f) 2026-03-19 00:42:01.321616 | orchestrator | 2026-03-19 00:42:01.321627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321638 | orchestrator | Thursday 19 March 2026 00:41:57 +0000 (0:00:00.406) 0:00:17.351 ******** 2026-03-19 00:42:01.321649 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361) 2026-03-19 00:42:01.321660 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361) 2026-03-19 00:42:01.321671 | orchestrator | 2026-03-19 00:42:01.321682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321711 | orchestrator | Thursday 19 March 2026 00:41:57 +0000 (0:00:00.422) 0:00:17.774 ******** 2026-03-19 00:42:01.321723 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d) 2026-03-19 00:42:01.321734 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d) 2026-03-19 00:42:01.321745 | orchestrator | 2026-03-19 00:42:01.321765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:01.321776 | orchestrator | Thursday 19 March 2026 00:41:58 +0000 (0:00:00.386) 0:00:18.160 ******** 2026-03-19 00:42:01.321787 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 00:42:01.321797 | orchestrator | 2026-03-19 00:42:01.321808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.321819 | orchestrator | Thursday 19 March 2026 00:41:58 +0000 (0:00:00.328) 0:00:18.489 ******** 2026-03-19 00:42:01.321829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-19 00:42:01.321840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-19 00:42:01.321857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-19 00:42:01.321868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-19 00:42:01.321879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-19 00:42:01.321890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-19 00:42:01.321901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-19 00:42:01.321912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-19 00:42:01.321922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-19 00:42:01.321933 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-19 00:42:01.321944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-19 00:42:01.321955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-19 00:42:01.321965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-19 00:42:01.321976 | orchestrator | 2026-03-19 00:42:01.321987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.321998 | orchestrator | Thursday 19 March 2026 00:41:58 +0000 (0:00:00.357) 0:00:18.847 ******** 2026-03-19 00:42:01.322009 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322081 | orchestrator | 2026-03-19 00:42:01.322095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322106 | orchestrator | Thursday 19 March 2026 00:41:59 +0000 (0:00:00.193) 0:00:19.040 ******** 2026-03-19 00:42:01.322117 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322128 | orchestrator | 2026-03-19 00:42:01.322139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322150 | orchestrator | Thursday 19 March 2026 00:41:59 +0000 (0:00:00.496) 0:00:19.537 ******** 2026-03-19 00:42:01.322161 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322172 | orchestrator | 2026-03-19 00:42:01.322182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322193 | orchestrator | Thursday 19 March 2026 00:41:59 +0000 (0:00:00.197) 0:00:19.735 ******** 2026-03-19 00:42:01.322204 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322215 | orchestrator | 2026-03-19 00:42:01.322225 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322236 | orchestrator | Thursday 19 March 2026 00:41:59 +0000 (0:00:00.190) 0:00:19.925 ******** 2026-03-19 00:42:01.322247 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322258 | orchestrator | 2026-03-19 00:42:01.322269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322279 | orchestrator | Thursday 19 March 2026 00:42:00 +0000 (0:00:00.191) 0:00:20.116 ******** 2026-03-19 00:42:01.322290 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322308 | orchestrator | 2026-03-19 00:42:01.322319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322330 | orchestrator | Thursday 19 March 2026 00:42:00 +0000 (0:00:00.173) 0:00:20.290 ******** 2026-03-19 00:42:01.322341 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322352 | orchestrator | 2026-03-19 00:42:01.322363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322373 | orchestrator | Thursday 19 March 2026 00:42:00 +0000 (0:00:00.186) 0:00:20.476 ******** 2026-03-19 00:42:01.322384 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:01.322395 | orchestrator | 2026-03-19 00:42:01.322406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322439 | orchestrator | Thursday 19 March 2026 00:42:00 +0000 (0:00:00.173) 0:00:20.650 ******** 2026-03-19 00:42:01.322452 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-19 00:42:01.322464 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-19 00:42:01.322476 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-19 00:42:01.322487 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-19 00:42:01.322497 | orchestrator | 2026-03-19 
00:42:01.322508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:01.322519 | orchestrator | Thursday 19 March 2026 00:42:01 +0000 (0:00:00.573) 0:00:21.223 ******** 2026-03-19 00:42:01.322530 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053116 | orchestrator | 2026-03-19 00:42:07.053236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:07.053252 | orchestrator | Thursday 19 March 2026 00:42:01 +0000 (0:00:00.184) 0:00:21.408 ******** 2026-03-19 00:42:07.053265 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053277 | orchestrator | 2026-03-19 00:42:07.053288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:07.053300 | orchestrator | Thursday 19 March 2026 00:42:01 +0000 (0:00:00.189) 0:00:21.597 ******** 2026-03-19 00:42:07.053311 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053322 | orchestrator | 2026-03-19 00:42:07.053333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:07.053344 | orchestrator | Thursday 19 March 2026 00:42:01 +0000 (0:00:00.171) 0:00:21.768 ******** 2026-03-19 00:42:07.053355 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053366 | orchestrator | 2026-03-19 00:42:07.053377 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-19 00:42:07.053388 | orchestrator | Thursday 19 March 2026 00:42:01 +0000 (0:00:00.173) 0:00:21.942 ******** 2026-03-19 00:42:07.053399 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-19 00:42:07.053410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-19 00:42:07.053483 | orchestrator | 2026-03-19 00:42:07.053495 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-19 00:42:07.053528 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.280) 0:00:22.223 ******** 2026-03-19 00:42:07.053546 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053564 | orchestrator | 2026-03-19 00:42:07.053583 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-19 00:42:07.053611 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.124) 0:00:22.348 ******** 2026-03-19 00:42:07.053632 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053651 | orchestrator | 2026-03-19 00:42:07.053669 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-19 00:42:07.053696 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.123) 0:00:22.472 ******** 2026-03-19 00:42:07.053716 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.053735 | orchestrator | 2026-03-19 00:42:07.053753 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-19 00:42:07.053772 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.124) 0:00:22.597 ******** 2026-03-19 00:42:07.053826 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:42:07.053847 | orchestrator | 2026-03-19 00:42:07.053867 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-19 00:42:07.053887 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.123) 0:00:22.720 ******** 2026-03-19 00:42:07.053902 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c9339aa0-dcb3-5462-b16c-1d446efe678c'}}) 2026-03-19 00:42:07.053916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}}) 2026-03-19 00:42:07.053929 | orchestrator | 2026-03-19 00:42:07.053942 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-19 00:42:07.053981 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.155) 0:00:22.876 ******** 2026-03-19 00:42:07.053995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c9339aa0-dcb3-5462-b16c-1d446efe678c'}})  2026-03-19 00:42:07.054011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}})  2026-03-19 00:42:07.054080 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054092 | orchestrator | 2026-03-19 00:42:07.054103 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-19 00:42:07.054114 | orchestrator | Thursday 19 March 2026 00:42:02 +0000 (0:00:00.132) 0:00:23.008 ******** 2026-03-19 00:42:07.054124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c9339aa0-dcb3-5462-b16c-1d446efe678c'}})  2026-03-19 00:42:07.054135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}})  2026-03-19 00:42:07.054147 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054158 | orchestrator | 2026-03-19 00:42:07.054169 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-19 00:42:07.054180 | orchestrator | Thursday 19 March 2026 00:42:03 +0000 (0:00:00.137) 0:00:23.146 ******** 2026-03-19 00:42:07.054191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c9339aa0-dcb3-5462-b16c-1d446efe678c'}})  2026-03-19 00:42:07.054202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}})  2026-03-19 00:42:07.054213 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054224 | 
orchestrator | 2026-03-19 00:42:07.054234 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-19 00:42:07.054245 | orchestrator | Thursday 19 March 2026 00:42:03 +0000 (0:00:00.129) 0:00:23.275 ******** 2026-03-19 00:42:07.054256 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:42:07.054267 | orchestrator | 2026-03-19 00:42:07.054278 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-19 00:42:07.054288 | orchestrator | Thursday 19 March 2026 00:42:03 +0000 (0:00:00.127) 0:00:23.403 ******** 2026-03-19 00:42:07.054299 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:42:07.054310 | orchestrator | 2026-03-19 00:42:07.054321 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-19 00:42:07.054332 | orchestrator | Thursday 19 March 2026 00:42:03 +0000 (0:00:00.166) 0:00:23.570 ******** 2026-03-19 00:42:07.054367 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054379 | orchestrator | 2026-03-19 00:42:07.054390 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-19 00:42:07.054401 | orchestrator | Thursday 19 March 2026 00:42:03 +0000 (0:00:00.119) 0:00:23.689 ******** 2026-03-19 00:42:07.054412 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054480 | orchestrator | 2026-03-19 00:42:07.054491 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-19 00:42:07.054502 | orchestrator | Thursday 19 March 2026 00:42:03 +0000 (0:00:00.257) 0:00:23.946 ******** 2026-03-19 00:42:07.054513 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054536 | orchestrator | 2026-03-19 00:42:07.054547 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-19 00:42:07.054558 | orchestrator | Thursday 19 March 2026 00:42:04 +0000 
(0:00:00.131) 0:00:24.077 ******** 2026-03-19 00:42:07.054569 | orchestrator | ok: [testbed-node-4] => { 2026-03-19 00:42:07.054580 | orchestrator |  "ceph_osd_devices": { 2026-03-19 00:42:07.054591 | orchestrator |  "sdb": { 2026-03-19 00:42:07.054602 | orchestrator |  "osd_lvm_uuid": "c9339aa0-dcb3-5462-b16c-1d446efe678c" 2026-03-19 00:42:07.054614 | orchestrator |  }, 2026-03-19 00:42:07.054625 | orchestrator |  "sdc": { 2026-03-19 00:42:07.054635 | orchestrator |  "osd_lvm_uuid": "0813f2fe-0b5e-5f32-866c-c0f68041cbc1" 2026-03-19 00:42:07.054646 | orchestrator |  } 2026-03-19 00:42:07.054657 | orchestrator |  } 2026-03-19 00:42:07.054668 | orchestrator | } 2026-03-19 00:42:07.054679 | orchestrator | 2026-03-19 00:42:07.054690 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-19 00:42:07.054701 | orchestrator | Thursday 19 March 2026 00:42:04 +0000 (0:00:00.131) 0:00:24.209 ******** 2026-03-19 00:42:07.054712 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054723 | orchestrator | 2026-03-19 00:42:07.054734 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-19 00:42:07.054745 | orchestrator | Thursday 19 March 2026 00:42:04 +0000 (0:00:00.109) 0:00:24.318 ******** 2026-03-19 00:42:07.054756 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054767 | orchestrator | 2026-03-19 00:42:07.054777 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-19 00:42:07.054789 | orchestrator | Thursday 19 March 2026 00:42:04 +0000 (0:00:00.107) 0:00:24.426 ******** 2026-03-19 00:42:07.054799 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:42:07.054810 | orchestrator | 2026-03-19 00:42:07.054821 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-19 00:42:07.054840 | orchestrator | Thursday 19 March 2026 00:42:04 +0000 
(0:00:00.110) 0:00:24.536 ******** 2026-03-19 00:42:07.054851 | orchestrator | changed: [testbed-node-4] => { 2026-03-19 00:42:07.054862 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-19 00:42:07.054873 | orchestrator |  "ceph_osd_devices": { 2026-03-19 00:42:07.054884 | orchestrator |  "sdb": { 2026-03-19 00:42:07.054895 | orchestrator |  "osd_lvm_uuid": "c9339aa0-dcb3-5462-b16c-1d446efe678c" 2026-03-19 00:42:07.054906 | orchestrator |  }, 2026-03-19 00:42:07.054917 | orchestrator |  "sdc": { 2026-03-19 00:42:07.054927 | orchestrator |  "osd_lvm_uuid": "0813f2fe-0b5e-5f32-866c-c0f68041cbc1" 2026-03-19 00:42:07.054939 | orchestrator |  } 2026-03-19 00:42:07.054949 | orchestrator |  }, 2026-03-19 00:42:07.054960 | orchestrator |  "lvm_volumes": [ 2026-03-19 00:42:07.054971 | orchestrator |  { 2026-03-19 00:42:07.054982 | orchestrator |  "data": "osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c", 2026-03-19 00:42:07.054993 | orchestrator |  "data_vg": "ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c" 2026-03-19 00:42:07.055003 | orchestrator |  }, 2026-03-19 00:42:07.055014 | orchestrator |  { 2026-03-19 00:42:07.055025 | orchestrator |  "data": "osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1", 2026-03-19 00:42:07.055036 | orchestrator |  "data_vg": "ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1" 2026-03-19 00:42:07.055047 | orchestrator |  } 2026-03-19 00:42:07.055058 | orchestrator |  ] 2026-03-19 00:42:07.055069 | orchestrator |  } 2026-03-19 00:42:07.055079 | orchestrator | } 2026-03-19 00:42:07.055090 | orchestrator | 2026-03-19 00:42:07.055101 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-19 00:42:07.055112 | orchestrator | Thursday 19 March 2026 00:42:04 +0000 (0:00:00.179) 0:00:24.716 ******** 2026-03-19 00:42:07.055123 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-19 00:42:07.055133 | orchestrator | 2026-03-19 00:42:07.055154 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-19 00:42:07.055165 | orchestrator | 2026-03-19 00:42:07.055175 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 00:42:07.055186 | orchestrator | Thursday 19 March 2026 00:42:05 +0000 (0:00:00.913) 0:00:25.629 ******** 2026-03-19 00:42:07.055197 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-19 00:42:07.055208 | orchestrator | 2026-03-19 00:42:07.055219 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 00:42:07.055230 | orchestrator | Thursday 19 March 2026 00:42:05 +0000 (0:00:00.383) 0:00:26.012 ******** 2026-03-19 00:42:07.055240 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:42:07.055251 | orchestrator | 2026-03-19 00:42:07.055262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:07.055273 | orchestrator | Thursday 19 March 2026 00:42:06 +0000 (0:00:00.676) 0:00:26.689 ******** 2026-03-19 00:42:07.055284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-19 00:42:07.055295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-19 00:42:07.055305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-19 00:42:07.055316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-19 00:42:07.055327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-19 00:42:07.055345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-19 00:42:13.888010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-19 00:42:13.888157 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-19 00:42:13.888186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-19 00:42:13.888205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-19 00:42:13.888223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-19 00:42:13.888244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-19 00:42:13.888264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-19 00:42:13.888285 | orchestrator | 2026-03-19 00:42:13.888304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888326 | orchestrator | Thursday 19 March 2026 00:42:07 +0000 (0:00:00.383) 0:00:27.073 ******** 2026-03-19 00:42:13.888346 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888368 | orchestrator | 2026-03-19 00:42:13.888388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888409 | orchestrator | Thursday 19 March 2026 00:42:07 +0000 (0:00:00.259) 0:00:27.333 ******** 2026-03-19 00:42:13.888501 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888522 | orchestrator | 2026-03-19 00:42:13.888543 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888565 | orchestrator | Thursday 19 March 2026 00:42:07 +0000 (0:00:00.238) 0:00:27.571 ******** 2026-03-19 00:42:13.888584 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888597 | orchestrator | 2026-03-19 00:42:13.888609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888622 | 
orchestrator | Thursday 19 March 2026 00:42:07 +0000 (0:00:00.161) 0:00:27.732 ******** 2026-03-19 00:42:13.888634 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888647 | orchestrator | 2026-03-19 00:42:13.888659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888671 | orchestrator | Thursday 19 March 2026 00:42:07 +0000 (0:00:00.159) 0:00:27.892 ******** 2026-03-19 00:42:13.888728 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888741 | orchestrator | 2026-03-19 00:42:13.888754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888767 | orchestrator | Thursday 19 March 2026 00:42:08 +0000 (0:00:00.159) 0:00:28.051 ******** 2026-03-19 00:42:13.888785 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888806 | orchestrator | 2026-03-19 00:42:13.888827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888846 | orchestrator | Thursday 19 March 2026 00:42:08 +0000 (0:00:00.145) 0:00:28.196 ******** 2026-03-19 00:42:13.888866 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888885 | orchestrator | 2026-03-19 00:42:13.888904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888926 | orchestrator | Thursday 19 March 2026 00:42:08 +0000 (0:00:00.171) 0:00:28.368 ******** 2026-03-19 00:42:13.888947 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.888967 | orchestrator | 2026-03-19 00:42:13.888983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.888994 | orchestrator | Thursday 19 March 2026 00:42:08 +0000 (0:00:00.198) 0:00:28.566 ******** 2026-03-19 00:42:13.889004 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99) 2026-03-19 00:42:13.889017 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99) 2026-03-19 00:42:13.889028 | orchestrator | 2026-03-19 00:42:13.889039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.889049 | orchestrator | Thursday 19 March 2026 00:42:09 +0000 (0:00:00.497) 0:00:29.064 ******** 2026-03-19 00:42:13.889080 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5) 2026-03-19 00:42:13.889092 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5) 2026-03-19 00:42:13.889103 | orchestrator | 2026-03-19 00:42:13.889113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.889124 | orchestrator | Thursday 19 March 2026 00:42:09 +0000 (0:00:00.683) 0:00:29.747 ******** 2026-03-19 00:42:13.889135 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400) 2026-03-19 00:42:13.889146 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400) 2026-03-19 00:42:13.889157 | orchestrator | 2026-03-19 00:42:13.889167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:42:13.889178 | orchestrator | Thursday 19 March 2026 00:42:10 +0000 (0:00:00.374) 0:00:30.122 ******** 2026-03-19 00:42:13.889189 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85) 2026-03-19 00:42:13.889199 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85) 2026-03-19 00:42:13.889210 | orchestrator | 2026-03-19 00:42:13.889221 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-19 00:42:13.889232 | orchestrator | Thursday 19 March 2026 00:42:10 +0000 (0:00:00.300) 0:00:30.423 ******** 2026-03-19 00:42:13.889242 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 00:42:13.889253 | orchestrator | 2026-03-19 00:42:13.889264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889295 | orchestrator | Thursday 19 March 2026 00:42:10 +0000 (0:00:00.238) 0:00:30.661 ******** 2026-03-19 00:42:13.889307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-19 00:42:13.889318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-19 00:42:13.889329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-19 00:42:13.889340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-19 00:42:13.889360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-19 00:42:13.889371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-19 00:42:13.889382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-19 00:42:13.889392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-19 00:42:13.889403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-19 00:42:13.889447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-19 00:42:13.889460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-19 00:42:13.889471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-19 00:42:13.889481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-19 00:42:13.889492 | orchestrator | 2026-03-19 00:42:13.889503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889513 | orchestrator | Thursday 19 March 2026 00:42:10 +0000 (0:00:00.289) 0:00:30.950 ******** 2026-03-19 00:42:13.889524 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889534 | orchestrator | 2026-03-19 00:42:13.889545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889556 | orchestrator | Thursday 19 March 2026 00:42:11 +0000 (0:00:00.158) 0:00:31.109 ******** 2026-03-19 00:42:13.889566 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889577 | orchestrator | 2026-03-19 00:42:13.889587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889598 | orchestrator | Thursday 19 March 2026 00:42:11 +0000 (0:00:00.139) 0:00:31.248 ******** 2026-03-19 00:42:13.889608 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889619 | orchestrator | 2026-03-19 00:42:13.889630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889640 | orchestrator | Thursday 19 March 2026 00:42:11 +0000 (0:00:00.171) 0:00:31.420 ******** 2026-03-19 00:42:13.889651 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889662 | orchestrator | 2026-03-19 00:42:13.889672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889683 | orchestrator | Thursday 19 March 2026 00:42:11 +0000 (0:00:00.168) 0:00:31.588 ******** 2026-03-19 00:42:13.889693 
| orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889704 | orchestrator | 2026-03-19 00:42:13.889715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889725 | orchestrator | Thursday 19 March 2026 00:42:11 +0000 (0:00:00.150) 0:00:31.738 ******** 2026-03-19 00:42:13.889736 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889747 | orchestrator | 2026-03-19 00:42:13.889757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889768 | orchestrator | Thursday 19 March 2026 00:42:12 +0000 (0:00:00.492) 0:00:32.231 ******** 2026-03-19 00:42:13.889778 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889789 | orchestrator | 2026-03-19 00:42:13.889799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889810 | orchestrator | Thursday 19 March 2026 00:42:12 +0000 (0:00:00.173) 0:00:32.404 ******** 2026-03-19 00:42:13.889820 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889831 | orchestrator | 2026-03-19 00:42:13.889842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889852 | orchestrator | Thursday 19 March 2026 00:42:12 +0000 (0:00:00.178) 0:00:32.583 ******** 2026-03-19 00:42:13.889863 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-19 00:42:13.889881 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-19 00:42:13.889892 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-19 00:42:13.889902 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-19 00:42:13.889913 | orchestrator | 2026-03-19 00:42:13.889923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889934 | orchestrator | Thursday 19 March 2026 00:42:13 +0000 (0:00:00.602) 
0:00:33.186 ******** 2026-03-19 00:42:13.889945 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889956 | orchestrator | 2026-03-19 00:42:13.889966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.889977 | orchestrator | Thursday 19 March 2026 00:42:13 +0000 (0:00:00.189) 0:00:33.375 ******** 2026-03-19 00:42:13.889988 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.889998 | orchestrator | 2026-03-19 00:42:13.890009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.890077 | orchestrator | Thursday 19 March 2026 00:42:13 +0000 (0:00:00.173) 0:00:33.548 ******** 2026-03-19 00:42:13.890089 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.890100 | orchestrator | 2026-03-19 00:42:13.890111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:42:13.890122 | orchestrator | Thursday 19 March 2026 00:42:13 +0000 (0:00:00.189) 0:00:33.738 ******** 2026-03-19 00:42:13.890132 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:13.890143 | orchestrator | 2026-03-19 00:42:13.890162 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-19 00:42:17.677525 | orchestrator | Thursday 19 March 2026 00:42:13 +0000 (0:00:00.171) 0:00:33.909 ******** 2026-03-19 00:42:17.677629 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-19 00:42:17.677642 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-19 00:42:17.677652 | orchestrator | 2026-03-19 00:42:17.677663 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-19 00:42:17.677673 | orchestrator | Thursday 19 March 2026 00:42:14 +0000 (0:00:00.157) 0:00:34.067 ******** 2026-03-19 00:42:17.677682 | orchestrator | skipping: 
[testbed-node-5] 2026-03-19 00:42:17.677693 | orchestrator | 2026-03-19 00:42:17.677703 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-19 00:42:17.677712 | orchestrator | Thursday 19 March 2026 00:42:14 +0000 (0:00:00.180) 0:00:34.248 ******** 2026-03-19 00:42:17.677747 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.677764 | orchestrator | 2026-03-19 00:42:17.677780 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-19 00:42:17.677797 | orchestrator | Thursday 19 March 2026 00:42:14 +0000 (0:00:00.117) 0:00:34.365 ******** 2026-03-19 00:42:17.677813 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.677828 | orchestrator | 2026-03-19 00:42:17.677846 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-19 00:42:17.677863 | orchestrator | Thursday 19 March 2026 00:42:14 +0000 (0:00:00.128) 0:00:34.494 ******** 2026-03-19 00:42:17.677880 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:42:17.677892 | orchestrator | 2026-03-19 00:42:17.677901 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-19 00:42:17.677911 | orchestrator | Thursday 19 March 2026 00:42:14 +0000 (0:00:00.278) 0:00:34.772 ******** 2026-03-19 00:42:17.677921 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7952abd-f19d-5f54-b846-7c46d615b8fb'}}) 2026-03-19 00:42:17.677937 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '056512d9-3a02-5302-afc2-fa0158449af3'}}) 2026-03-19 00:42:17.677947 | orchestrator | 2026-03-19 00:42:17.677963 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-19 00:42:17.677975 | orchestrator | Thursday 19 March 2026 00:42:14 +0000 (0:00:00.150) 0:00:34.922 ******** 2026-03-19 00:42:17.677987 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7952abd-f19d-5f54-b846-7c46d615b8fb'}})  2026-03-19 00:42:17.678099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '056512d9-3a02-5302-afc2-fa0158449af3'}})  2026-03-19 00:42:17.678118 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678132 | orchestrator | 2026-03-19 00:42:17.678144 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-19 00:42:17.678154 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.139) 0:00:35.062 ******** 2026-03-19 00:42:17.678163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7952abd-f19d-5f54-b846-7c46d615b8fb'}})  2026-03-19 00:42:17.678172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '056512d9-3a02-5302-afc2-fa0158449af3'}})  2026-03-19 00:42:17.678181 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678189 | orchestrator | 2026-03-19 00:42:17.678197 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-19 00:42:17.678205 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.133) 0:00:35.196 ******** 2026-03-19 00:42:17.678212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7952abd-f19d-5f54-b846-7c46d615b8fb'}})  2026-03-19 00:42:17.678221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '056512d9-3a02-5302-afc2-fa0158449af3'}})  2026-03-19 00:42:17.678228 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678236 | orchestrator | 2026-03-19 00:42:17.678244 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-19 00:42:17.678255 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 
(0:00:00.126) 0:00:35.322 ******** 2026-03-19 00:42:17.678269 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:42:17.678281 | orchestrator | 2026-03-19 00:42:17.678294 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-19 00:42:17.678308 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.126) 0:00:35.448 ******** 2026-03-19 00:42:17.678321 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:42:17.678335 | orchestrator | 2026-03-19 00:42:17.678349 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-19 00:42:17.678363 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.130) 0:00:35.579 ******** 2026-03-19 00:42:17.678377 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678391 | orchestrator | 2026-03-19 00:42:17.678404 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-19 00:42:17.678438 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.132) 0:00:35.711 ******** 2026-03-19 00:42:17.678446 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678454 | orchestrator | 2026-03-19 00:42:17.678462 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-19 00:42:17.678470 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.114) 0:00:35.826 ******** 2026-03-19 00:42:17.678478 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678485 | orchestrator | 2026-03-19 00:42:17.678493 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-19 00:42:17.678501 | orchestrator | Thursday 19 March 2026 00:42:15 +0000 (0:00:00.135) 0:00:35.961 ******** 2026-03-19 00:42:17.678509 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 00:42:17.678517 | orchestrator |  "ceph_osd_devices": { 2026-03-19 00:42:17.678525 | orchestrator |  "sdb": { 
2026-03-19 00:42:17.678553 | orchestrator |  "osd_lvm_uuid": "f7952abd-f19d-5f54-b846-7c46d615b8fb" 2026-03-19 00:42:17.678562 | orchestrator |  }, 2026-03-19 00:42:17.678570 | orchestrator |  "sdc": { 2026-03-19 00:42:17.678578 | orchestrator |  "osd_lvm_uuid": "056512d9-3a02-5302-afc2-fa0158449af3" 2026-03-19 00:42:17.678586 | orchestrator |  } 2026-03-19 00:42:17.678594 | orchestrator |  } 2026-03-19 00:42:17.678602 | orchestrator | } 2026-03-19 00:42:17.678610 | orchestrator | 2026-03-19 00:42:17.678627 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-19 00:42:17.678636 | orchestrator | Thursday 19 March 2026 00:42:16 +0000 (0:00:00.114) 0:00:36.076 ******** 2026-03-19 00:42:17.678643 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678651 | orchestrator | 2026-03-19 00:42:17.678659 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-19 00:42:17.678667 | orchestrator | Thursday 19 March 2026 00:42:16 +0000 (0:00:00.121) 0:00:36.197 ******** 2026-03-19 00:42:17.678675 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678683 | orchestrator | 2026-03-19 00:42:17.678691 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-19 00:42:17.678698 | orchestrator | Thursday 19 March 2026 00:42:16 +0000 (0:00:00.227) 0:00:36.425 ******** 2026-03-19 00:42:17.678706 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:42:17.678714 | orchestrator | 2026-03-19 00:42:17.678722 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-19 00:42:17.678733 | orchestrator | Thursday 19 March 2026 00:42:16 +0000 (0:00:00.128) 0:00:36.553 ******** 2026-03-19 00:42:17.678746 | orchestrator | changed: [testbed-node-5] => { 2026-03-19 00:42:17.678759 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-19 00:42:17.678772 | orchestrator | 
 "ceph_osd_devices": { 2026-03-19 00:42:17.678785 | orchestrator |  "sdb": { 2026-03-19 00:42:17.678799 | orchestrator |  "osd_lvm_uuid": "f7952abd-f19d-5f54-b846-7c46d615b8fb" 2026-03-19 00:42:17.678812 | orchestrator |  }, 2026-03-19 00:42:17.678826 | orchestrator |  "sdc": { 2026-03-19 00:42:17.678834 | orchestrator |  "osd_lvm_uuid": "056512d9-3a02-5302-afc2-fa0158449af3" 2026-03-19 00:42:17.678842 | orchestrator |  } 2026-03-19 00:42:17.678850 | orchestrator |  }, 2026-03-19 00:42:17.678858 | orchestrator |  "lvm_volumes": [ 2026-03-19 00:42:17.678866 | orchestrator |  { 2026-03-19 00:42:17.678877 | orchestrator |  "data": "osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb", 2026-03-19 00:42:17.678891 | orchestrator |  "data_vg": "ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb" 2026-03-19 00:42:17.678904 | orchestrator |  }, 2026-03-19 00:42:17.678921 | orchestrator |  { 2026-03-19 00:42:17.678933 | orchestrator |  "data": "osd-block-056512d9-3a02-5302-afc2-fa0158449af3", 2026-03-19 00:42:17.678946 | orchestrator |  "data_vg": "ceph-056512d9-3a02-5302-afc2-fa0158449af3" 2026-03-19 00:42:17.678959 | orchestrator |  } 2026-03-19 00:42:17.678972 | orchestrator |  ] 2026-03-19 00:42:17.678985 | orchestrator |  } 2026-03-19 00:42:17.678998 | orchestrator | } 2026-03-19 00:42:17.679012 | orchestrator | 2026-03-19 00:42:17.679025 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-19 00:42:17.679038 | orchestrator | Thursday 19 March 2026 00:42:16 +0000 (0:00:00.190) 0:00:36.744 ******** 2026-03-19 00:42:17.679053 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-19 00:42:17.679065 | orchestrator | 2026-03-19 00:42:17.679078 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:42:17.679091 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 00:42:17.679105 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 00:42:17.679118 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 00:42:17.679131 | orchestrator | 2026-03-19 00:42:17.679144 | orchestrator | 2026-03-19 00:42:17.679158 | orchestrator | 2026-03-19 00:42:17.679171 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:42:17.679184 | orchestrator | Thursday 19 March 2026 00:42:17 +0000 (0:00:00.950) 0:00:37.694 ******** 2026-03-19 00:42:17.679210 | orchestrator | =============================================================================== 2026-03-19 00:42:17.679223 | orchestrator | Write configuration file ------------------------------------------------ 3.97s 2026-03-19 00:42:17.679237 | orchestrator | Get initial list of available block devices ----------------------------- 1.16s 2026-03-19 00:42:17.679261 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2026-03-19 00:42:17.679275 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-03-19 00:42:17.679289 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s 2026-03-19 00:42:17.679301 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-03-19 00:42:17.679313 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-19 00:42:17.679326 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-19 00:42:17.679339 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-03-19 00:42:17.679351 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-03-19 
00:42:17.679365 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2026-03-19 00:42:17.679377 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s 2026-03-19 00:42:17.679390 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2026-03-19 00:42:17.679473 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-03-19 00:42:17.903992 | orchestrator | Print configuration data ------------------------------------------------ 0.55s 2026-03-19 00:42:17.904087 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.53s 2026-03-19 00:42:17.904096 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.52s 2026-03-19 00:42:17.904102 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-03-19 00:42:17.904109 | orchestrator | Add known partitions to the list of available block devices ------------- 0.50s 2026-03-19 00:42:17.904115 | orchestrator | Set WAL devices config data --------------------------------------------- 0.50s 2026-03-19 00:42:39.515580 | orchestrator | 2026-03-19 00:42:39 | INFO  | Task 8bbdcbcd-736d-4cca-95a2-b180a0f2673d (sync inventory) is running in background. Output coming soon. 
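The configuration printed earlier in this play pairs each `osd_lvm_uuid` from `ceph_osd_devices` with an `osd-block-<uuid>` LV inside a `ceph-<uuid>` VG in `lvm_volumes`. A minimal sketch of that naming convention, using the UUIDs from the log above (`build_lvm_volumes` is a hypothetical helper for illustration, not part of the OSISM playbooks):

```python
# Sketch: derive the lvm_volumes list from ceph_osd_devices, mirroring the
# naming seen in the log output (osd-block-<uuid> LV in a ceph-<uuid> VG).
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, conf in sorted(ceph_osd_devices.items()):
        uuid = conf["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",     # logical volume holding the OSD
            "data_vg": f"ceph-{uuid}",       # volume group on the raw device
        })
    return volumes

# Values taken from the "Print configuration data" output above.
devices = {
    "sdb": {"osd_lvm_uuid": "f7952abd-f19d-5f54-b846-7c46d615b8fb"},
    "sdc": {"osd_lvm_uuid": "056512d9-3a02-5302-afc2-fa0158449af3"},
}
print(build_lvm_volumes(devices))
```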
2026-03-19 00:43:10.159130 | orchestrator | 2026-03-19 00:42:41 | INFO  | Starting group_vars file reorganization 2026-03-19 00:43:10.159223 | orchestrator | 2026-03-19 00:42:41 | INFO  | Moved 0 file(s) to their respective directories 2026-03-19 00:43:10.159235 | orchestrator | 2026-03-19 00:42:41 | INFO  | Group_vars file reorganization completed 2026-03-19 00:43:10.159242 | orchestrator | 2026-03-19 00:42:44 | INFO  | Starting variable preparation from inventory 2026-03-19 00:43:10.159250 | orchestrator | 2026-03-19 00:42:47 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-19 00:43:10.159254 | orchestrator | 2026-03-19 00:42:47 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-19 00:43:10.159275 | orchestrator | 2026-03-19 00:42:47 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-19 00:43:10.159280 | orchestrator | 2026-03-19 00:42:47 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-19 00:43:10.159284 | orchestrator | 2026-03-19 00:42:47 | INFO  | Variable preparation completed 2026-03-19 00:43:10.159288 | orchestrator | 2026-03-19 00:42:48 | INFO  | Starting inventory overwrite handling 2026-03-19 00:43:10.159292 | orchestrator | 2026-03-19 00:42:48 | INFO  | Handling group overwrites in 99-overwrite 2026-03-19 00:43:10.159296 | orchestrator | 2026-03-19 00:42:48 | INFO  | Removing group frr:children from 60-generic 2026-03-19 00:43:10.159321 | orchestrator | 2026-03-19 00:42:48 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-19 00:43:10.159325 | orchestrator | 2026-03-19 00:42:48 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-19 00:43:10.159330 | orchestrator | 2026-03-19 00:42:48 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-19 00:43:10.159333 | orchestrator | 2026-03-19 00:42:48 | INFO  | Handling group overwrites in 20-roles 2026-03-19 00:43:10.159337 | orchestrator | 2026-03-19 00:42:48 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-19 00:43:10.159341 | orchestrator | 2026-03-19 00:42:48 | INFO  | Removed 5 group(s) in total 2026-03-19 00:43:10.159345 | orchestrator | 2026-03-19 00:42:48 | INFO  | Inventory overwrite handling completed 2026-03-19 00:43:10.159349 | orchestrator | 2026-03-19 00:42:49 | INFO  | Starting merge of inventory files 2026-03-19 00:43:10.159353 | orchestrator | 2026-03-19 00:42:49 | INFO  | Inventory files merged successfully 2026-03-19 00:43:10.159356 | orchestrator | 2026-03-19 00:42:54 | INFO  | Generating minified hosts file 2026-03-19 00:43:10.159360 | orchestrator | 2026-03-19 00:42:55 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-03-19 00:43:10.159365 | orchestrator | 2026-03-19 00:42:55 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-03-19 00:43:10.159369 | orchestrator | 2026-03-19 00:42:57 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-19 00:43:10.159373 | orchestrator | 2026-03-19 00:43:08 | INFO  | Successfully wrote ClusterShell configuration 2026-03-19 00:43:10.159377 | orchestrator | [master cec1845] 2026-03-19-00-43 2026-03-19 00:43:10.159382 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-) 2026-03-19 00:43:10.159421 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml 2026-03-19 00:43:10.159427 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml 2026-03-19 00:43:10.159431 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml 2026-03-19 00:43:11.471338 | orchestrator | 2026-03-19 00:43:11 | INFO  | Prepare task for execution of ceph-create-lvm-devices. 2026-03-19 00:43:11.533113 | orchestrator | 2026-03-19 00:43:11 | INFO  | Task a3200f07-7e38-4237-ae2e-dbce16fa8b3e (ceph-create-lvm-devices) was prepared for execution. 
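The sync-inventory task above merges the inventory layers and then writes a "minified" hosts file. A rough sketch of what minification could mean here — keeping only the group-to-hosts mapping and dropping per-host variables; `minify_inventory` and the sample data are illustrative assumptions, not the OSISM implementation:

```python
# Sketch: reduce a full inventory (groups -> hosts -> host vars) to a
# minified form that keeps only group membership. Hypothetical illustration.
def minify_inventory(inventory):
    minified = {}
    for group, data in inventory.items():
        # keep host names only, discard host-level variables
        minified[group] = {"hosts": sorted(data.get("hosts", {}))}
    return minified

# Example structure (host names from this job; the vars are made up).
full = {
    "ceph": {
        "hosts": {
            "testbed-node-3": {"example_var": "a"},
            "testbed-node-4": {"example_var": "b"},
        }
    },
}
print(minify_inventory(full))
```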
2026-03-19 00:43:11.533208 | orchestrator | 2026-03-19 00:43:11 | INFO  | It takes a moment until task a3200f07-7e38-4237-ae2e-dbce16fa8b3e (ceph-create-lvm-devices) has been started and output is visible here. 2026-03-19 00:43:23.305312 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-19 00:43:23.305445 | orchestrator | 2.16.14 2026-03-19 00:43:23.305454 | orchestrator | 2026-03-19 00:43:23.305459 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-19 00:43:23.305464 | orchestrator | 2026-03-19 00:43:23.305468 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 00:43:23.305473 | orchestrator | Thursday 19 March 2026 00:43:16 +0000 (0:00:00.332) 0:00:00.333 ******** 2026-03-19 00:43:23.305477 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-19 00:43:23.305482 | orchestrator | 2026-03-19 00:43:23.305486 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 00:43:23.305490 | orchestrator | Thursday 19 March 2026 00:43:16 +0000 (0:00:00.225) 0:00:00.558 ******** 2026-03-19 00:43:23.305494 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:23.305498 | orchestrator | 2026-03-19 00:43:23.305502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305506 | orchestrator | Thursday 19 March 2026 00:43:16 +0000 (0:00:00.194) 0:00:00.753 ******** 2026-03-19 00:43:23.305531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-19 00:43:23.305536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-19 00:43:23.305539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-19 00:43:23.305543 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-19 00:43:23.305547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-19 00:43:23.305551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-19 00:43:23.305555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-19 00:43:23.305559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-19 00:43:23.305564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-19 00:43:23.305571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-19 00:43:23.305577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-19 00:43:23.305583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-19 00:43:23.305591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-19 00:43:23.305598 | orchestrator | 2026-03-19 00:43:23.305605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305611 | orchestrator | Thursday 19 March 2026 00:43:16 +0000 (0:00:00.355) 0:00:01.108 ******** 2026-03-19 00:43:23.305618 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305624 | orchestrator | 2026-03-19 00:43:23.305631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305638 | orchestrator | Thursday 19 March 2026 00:43:17 +0000 (0:00:00.438) 0:00:01.547 ******** 2026-03-19 00:43:23.305644 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305649 | orchestrator | 2026-03-19 00:43:23.305656 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305663 | orchestrator | Thursday 19 March 2026 00:43:17 +0000 (0:00:00.186) 0:00:01.734 ******** 2026-03-19 00:43:23.305686 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305690 | orchestrator | 2026-03-19 00:43:23.305694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305698 | orchestrator | Thursday 19 March 2026 00:43:17 +0000 (0:00:00.189) 0:00:01.923 ******** 2026-03-19 00:43:23.305702 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305706 | orchestrator | 2026-03-19 00:43:23.305717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305721 | orchestrator | Thursday 19 March 2026 00:43:17 +0000 (0:00:00.165) 0:00:02.088 ******** 2026-03-19 00:43:23.305724 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305728 | orchestrator | 2026-03-19 00:43:23.305732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305736 | orchestrator | Thursday 19 March 2026 00:43:18 +0000 (0:00:00.184) 0:00:02.272 ******** 2026-03-19 00:43:23.305740 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305744 | orchestrator | 2026-03-19 00:43:23.305747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305751 | orchestrator | Thursday 19 March 2026 00:43:18 +0000 (0:00:00.196) 0:00:02.469 ******** 2026-03-19 00:43:23.305755 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305759 | orchestrator | 2026-03-19 00:43:23.305763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305766 | orchestrator | Thursday 19 March 2026 00:43:18 +0000 (0:00:00.189) 0:00:02.658 ******** 
2026-03-19 00:43:23.305770 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305779 | orchestrator | 2026-03-19 00:43:23.305783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305787 | orchestrator | Thursday 19 March 2026 00:43:18 +0000 (0:00:00.184) 0:00:02.842 ******** 2026-03-19 00:43:23.305791 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d) 2026-03-19 00:43:23.305796 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d) 2026-03-19 00:43:23.305799 | orchestrator | 2026-03-19 00:43:23.305803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305819 | orchestrator | Thursday 19 March 2026 00:43:18 +0000 (0:00:00.420) 0:00:03.263 ******** 2026-03-19 00:43:23.305823 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1) 2026-03-19 00:43:23.305827 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1) 2026-03-19 00:43:23.305830 | orchestrator | 2026-03-19 00:43:23.305834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305838 | orchestrator | Thursday 19 March 2026 00:43:19 +0000 (0:00:00.404) 0:00:03.667 ******** 2026-03-19 00:43:23.305842 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600) 2026-03-19 00:43:23.305846 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600) 2026-03-19 00:43:23.305849 | orchestrator | 2026-03-19 00:43:23.305853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305857 | orchestrator | Thursday 19 March 2026 00:43:20 
+0000 (0:00:00.620) 0:00:04.288 ******** 2026-03-19 00:43:23.305861 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3) 2026-03-19 00:43:23.305864 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3) 2026-03-19 00:43:23.305868 | orchestrator | 2026-03-19 00:43:23.305872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:23.305876 | orchestrator | Thursday 19 March 2026 00:43:20 +0000 (0:00:00.695) 0:00:04.983 ******** 2026-03-19 00:43:23.305879 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 00:43:23.305883 | orchestrator | 2026-03-19 00:43:23.305887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.305894 | orchestrator | Thursday 19 March 2026 00:43:21 +0000 (0:00:00.777) 0:00:05.761 ******** 2026-03-19 00:43:23.305898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-19 00:43:23.305901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-19 00:43:23.305905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-19 00:43:23.305909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-19 00:43:23.305913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-19 00:43:23.305917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-19 00:43:23.305920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-19 00:43:23.305924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-03-19 00:43:23.305928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-19 00:43:23.305931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-19 00:43:23.305935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-19 00:43:23.305939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-19 00:43:23.305947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-19 00:43:23.305951 | orchestrator | 2026-03-19 00:43:23.305955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.305959 | orchestrator | Thursday 19 March 2026 00:43:21 +0000 (0:00:00.450) 0:00:06.212 ******** 2026-03-19 00:43:23.305962 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305966 | orchestrator | 2026-03-19 00:43:23.305970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.305973 | orchestrator | Thursday 19 March 2026 00:43:22 +0000 (0:00:00.206) 0:00:06.419 ******** 2026-03-19 00:43:23.305977 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305981 | orchestrator | 2026-03-19 00:43:23.305985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.305989 | orchestrator | Thursday 19 March 2026 00:43:22 +0000 (0:00:00.218) 0:00:06.637 ******** 2026-03-19 00:43:23.305992 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.305996 | orchestrator | 2026-03-19 00:43:23.306000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.306003 | orchestrator | Thursday 19 March 2026 00:43:22 
+0000 (0:00:00.199) 0:00:06.837 ******** 2026-03-19 00:43:23.306007 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.306011 | orchestrator | 2026-03-19 00:43:23.306051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.306055 | orchestrator | Thursday 19 March 2026 00:43:22 +0000 (0:00:00.187) 0:00:07.025 ******** 2026-03-19 00:43:23.306059 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.306063 | orchestrator | 2026-03-19 00:43:23.306067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.306071 | orchestrator | Thursday 19 March 2026 00:43:22 +0000 (0:00:00.190) 0:00:07.216 ******** 2026-03-19 00:43:23.306074 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.306078 | orchestrator | 2026-03-19 00:43:23.306082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:23.306086 | orchestrator | Thursday 19 March 2026 00:43:23 +0000 (0:00:00.170) 0:00:07.386 ******** 2026-03-19 00:43:23.306089 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:23.306093 | orchestrator | 2026-03-19 00:43:23.306100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:30.554268 | orchestrator | Thursday 19 March 2026 00:43:23 +0000 (0:00:00.175) 0:00:07.561 ******** 2026-03-19 00:43:30.554349 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554357 | orchestrator | 2026-03-19 00:43:30.554362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:30.554367 | orchestrator | Thursday 19 March 2026 00:43:23 +0000 (0:00:00.185) 0:00:07.747 ******** 2026-03-19 00:43:30.554371 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-19 00:43:30.554376 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-19 
00:43:30.554381 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-19 00:43:30.554385 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-19 00:43:30.554425 | orchestrator | 2026-03-19 00:43:30.554435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:30.554442 | orchestrator | Thursday 19 March 2026 00:43:24 +0000 (0:00:00.844) 0:00:08.592 ******** 2026-03-19 00:43:30.554448 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554453 | orchestrator | 2026-03-19 00:43:30.554459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:30.554465 | orchestrator | Thursday 19 March 2026 00:43:24 +0000 (0:00:00.171) 0:00:08.764 ******** 2026-03-19 00:43:30.554472 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554478 | orchestrator | 2026-03-19 00:43:30.554485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:30.554518 | orchestrator | Thursday 19 March 2026 00:43:24 +0000 (0:00:00.184) 0:00:08.948 ******** 2026-03-19 00:43:30.554523 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554527 | orchestrator | 2026-03-19 00:43:30.554531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:30.554535 | orchestrator | Thursday 19 March 2026 00:43:24 +0000 (0:00:00.170) 0:00:09.119 ******** 2026-03-19 00:43:30.554539 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554542 | orchestrator | 2026-03-19 00:43:30.554547 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-19 00:43:30.554550 | orchestrator | Thursday 19 March 2026 00:43:25 +0000 (0:00:00.187) 0:00:09.307 ******** 2026-03-19 00:43:30.554554 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554558 | orchestrator | 2026-03-19 
00:43:30.554562 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-19 00:43:30.554566 | orchestrator | Thursday 19 March 2026 00:43:25 +0000 (0:00:00.115) 0:00:09.423 ******** 2026-03-19 00:43:30.554570 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}}) 2026-03-19 00:43:30.554575 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd672a78a-4132-5655-a0fe-bae0f8eb714c'}}) 2026-03-19 00:43:30.554578 | orchestrator | 2026-03-19 00:43:30.554582 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-19 00:43:30.554587 | orchestrator | Thursday 19 March 2026 00:43:25 +0000 (0:00:00.172) 0:00:09.595 ******** 2026-03-19 00:43:30.554594 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}) 2026-03-19 00:43:30.554604 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'}) 2026-03-19 00:43:30.554612 | orchestrator | 2026-03-19 00:43:30.554617 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-19 00:43:30.554623 | orchestrator | Thursday 19 March 2026 00:43:27 +0000 (0:00:01.900) 0:00:11.496 ******** 2026-03-19 00:43:30.554629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554659 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554665 
| orchestrator | 2026-03-19 00:43:30.554672 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-19 00:43:30.554679 | orchestrator | Thursday 19 March 2026 00:43:27 +0000 (0:00:00.118) 0:00:11.614 ******** 2026-03-19 00:43:30.554685 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}) 2026-03-19 00:43:30.554692 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'}) 2026-03-19 00:43:30.554696 | orchestrator | 2026-03-19 00:43:30.554700 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-19 00:43:30.554704 | orchestrator | Thursday 19 March 2026 00:43:28 +0000 (0:00:01.465) 0:00:13.080 ******** 2026-03-19 00:43:30.554708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554711 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554715 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554719 | orchestrator | 2026-03-19 00:43:30.554723 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-19 00:43:30.554733 | orchestrator | Thursday 19 March 2026 00:43:28 +0000 (0:00:00.132) 0:00:13.213 ******** 2026-03-19 00:43:30.554751 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554755 | orchestrator | 2026-03-19 00:43:30.554759 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-19 00:43:30.554762 | orchestrator | Thursday 19 March 2026 
00:43:29 +0000 (0:00:00.113) 0:00:13.326 ******** 2026-03-19 00:43:30.554766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554770 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554774 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554777 | orchestrator | 2026-03-19 00:43:30.554781 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-19 00:43:30.554785 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.270) 0:00:13.597 ******** 2026-03-19 00:43:30.554789 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554792 | orchestrator | 2026-03-19 00:43:30.554796 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-19 00:43:30.554800 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.134) 0:00:13.731 ******** 2026-03-19 00:43:30.554804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554811 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554815 | orchestrator | 2026-03-19 00:43:30.554823 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-19 00:43:30.554827 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.136) 0:00:13.868 ******** 2026-03-19 00:43:30.554830 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
00:43:30.554834 | orchestrator | 2026-03-19 00:43:30.554838 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-19 00:43:30.554842 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.122) 0:00:13.991 ******** 2026-03-19 00:43:30.554845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554853 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554857 | orchestrator | 2026-03-19 00:43:30.554860 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-19 00:43:30.554864 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.124) 0:00:14.116 ******** 2026-03-19 00:43:30.554868 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:30.554872 | orchestrator | 2026-03-19 00:43:30.554876 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-19 00:43:30.554880 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.130) 0:00:14.246 ******** 2026-03-19 00:43:30.554883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554891 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554895 | orchestrator | 2026-03-19 00:43:30.554899 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] 
*************** 2026-03-19 00:43:30.554906 | orchestrator | Thursday 19 March 2026 00:43:30 +0000 (0:00:00.135) 0:00:14.381 ******** 2026-03-19 00:43:30.554910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554917 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554921 | orchestrator | 2026-03-19 00:43:30.554925 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-19 00:43:30.554928 | orchestrator | Thursday 19 March 2026 00:43:30 +0000 (0:00:00.151) 0:00:14.533 ******** 2026-03-19 00:43:30.554932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:30.554936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:30.554940 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.554943 | orchestrator | 2026-03-19 00:43:30.554947 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-19 00:43:30.554951 | orchestrator | Thursday 19 March 2026 00:43:30 +0000 (0:00:00.159) 0:00:14.692 ******** 2026-03-19 00:43:30.555211 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:30.555219 | orchestrator | 2026-03-19 00:43:30.555226 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-19 00:43:30.555242 | orchestrator | Thursday 19 March 2026 00:43:30 +0000 (0:00:00.117) 0:00:14.810 ******** 
2026-03-19 00:43:36.331760 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.331845 | orchestrator | 2026-03-19 00:43:36.331852 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-19 00:43:36.331858 | orchestrator | Thursday 19 March 2026 00:43:30 +0000 (0:00:00.117) 0:00:14.927 ******** 2026-03-19 00:43:36.331862 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.331867 | orchestrator | 2026-03-19 00:43:36.331871 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-19 00:43:36.331875 | orchestrator | Thursday 19 March 2026 00:43:30 +0000 (0:00:00.120) 0:00:15.048 ******** 2026-03-19 00:43:36.331879 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 00:43:36.331884 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-19 00:43:36.331888 | orchestrator | } 2026-03-19 00:43:36.331893 | orchestrator | 2026-03-19 00:43:36.331897 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-19 00:43:36.331901 | orchestrator | Thursday 19 March 2026 00:43:31 +0000 (0:00:00.262) 0:00:15.311 ******** 2026-03-19 00:43:36.331905 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 00:43:36.331908 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-19 00:43:36.331912 | orchestrator | } 2026-03-19 00:43:36.331916 | orchestrator | 2026-03-19 00:43:36.331920 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-19 00:43:36.331924 | orchestrator | Thursday 19 March 2026 00:43:31 +0000 (0:00:00.128) 0:00:15.440 ******** 2026-03-19 00:43:36.331928 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 00:43:36.331932 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-19 00:43:36.331936 | orchestrator | } 2026-03-19 00:43:36.331940 | orchestrator | 2026-03-19 00:43:36.331943 | orchestrator | TASK [Gather DB VGs with total and 
available size in bytes] ******************** 2026-03-19 00:43:36.331947 | orchestrator | Thursday 19 March 2026 00:43:31 +0000 (0:00:00.140) 0:00:15.581 ******** 2026-03-19 00:43:36.331951 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:36.331955 | orchestrator | 2026-03-19 00:43:36.331959 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-19 00:43:36.331963 | orchestrator | Thursday 19 March 2026 00:43:31 +0000 (0:00:00.648) 0:00:16.229 ******** 2026-03-19 00:43:36.331989 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:36.331993 | orchestrator | 2026-03-19 00:43:36.331997 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-19 00:43:36.332001 | orchestrator | Thursday 19 March 2026 00:43:32 +0000 (0:00:00.508) 0:00:16.737 ******** 2026-03-19 00:43:36.332005 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:36.332008 | orchestrator | 2026-03-19 00:43:36.332012 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-19 00:43:36.332016 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.529) 0:00:17.267 ******** 2026-03-19 00:43:36.332020 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:36.332024 | orchestrator | 2026-03-19 00:43:36.332028 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-19 00:43:36.332032 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.139) 0:00:17.406 ******** 2026-03-19 00:43:36.332035 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332039 | orchestrator | 2026-03-19 00:43:36.332043 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-19 00:43:36.332049 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.116) 0:00:17.522 ******** 2026-03-19 00:43:36.332055 | orchestrator | skipping: [testbed-node-3] 
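(Annotation, not part of the job output.) The "Gather … VGs with total and available size in bytes" tasks above, and the subsequent "Combine JSON from _db/wal/db_wal_vgs_cmd_output" step, consume JSON reports from the LVM tooling. A sketch of parsing that report shape, mirroring the structure emitted by `vgs --reportformat json --units b` (the sample values are invented):

```python
# Sketch: extracting total/available bytes per VG from a vgs JSON report.
# The report structure matches `vgs --reportformat json --units b`;
# the vg_name and size values here are invented sample data.
import json

vgs_cmd_output = json.dumps({
    "report": [{
        "vg": [
            {"vg_name": "ceph-db-0",
             "vg_size": "10737418240B",
             "vg_free": "5368709120B"},
        ]
    }]
})

vgs = json.loads(vgs_cmd_output)["report"][0]["vg"]
vg_sizes = {
    vg["vg_name"]: {
        "total": int(vg["vg_size"].rstrip("B")),
        "available": int(vg["vg_free"].rstrip("B")),
    }
    for vg in vgs
}
```

With an empty `vg` list, as in the `"vgs_report": {"vg": []}` printed later in this play, the resulting dictionary is simply empty and every size check is skipped.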
2026-03-19 00:43:36.332061 | orchestrator | 2026-03-19 00:43:36.332067 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-19 00:43:36.332073 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.093) 0:00:17.616 ******** 2026-03-19 00:43:36.332079 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 00:43:36.332086 | orchestrator |  "vgs_report": { 2026-03-19 00:43:36.332092 | orchestrator |  "vg": [] 2026-03-19 00:43:36.332098 | orchestrator |  } 2026-03-19 00:43:36.332104 | orchestrator | } 2026-03-19 00:43:36.332110 | orchestrator | 2026-03-19 00:43:36.332116 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-19 00:43:36.332122 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.152) 0:00:17.769 ******** 2026-03-19 00:43:36.332125 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332129 | orchestrator | 2026-03-19 00:43:36.332133 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-19 00:43:36.332137 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.123) 0:00:17.892 ******** 2026-03-19 00:43:36.332141 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332144 | orchestrator | 2026-03-19 00:43:36.332148 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-19 00:43:36.332152 | orchestrator | Thursday 19 March 2026 00:43:33 +0000 (0:00:00.140) 0:00:18.033 ******** 2026-03-19 00:43:36.332156 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332160 | orchestrator | 2026-03-19 00:43:36.332164 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-19 00:43:36.332167 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.271) 0:00:18.305 ******** 2026-03-19 00:43:36.332171 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 00:43:36.332175 | orchestrator | 2026-03-19 00:43:36.332178 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-19 00:43:36.332182 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.127) 0:00:18.432 ******** 2026-03-19 00:43:36.332186 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332189 | orchestrator | 2026-03-19 00:43:36.332193 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-19 00:43:36.332197 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.129) 0:00:18.561 ******** 2026-03-19 00:43:36.332201 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332204 | orchestrator | 2026-03-19 00:43:36.332208 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-19 00:43:36.332212 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.112) 0:00:18.673 ******** 2026-03-19 00:43:36.332215 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332223 | orchestrator | 2026-03-19 00:43:36.332227 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-19 00:43:36.332231 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.121) 0:00:18.795 ******** 2026-03-19 00:43:36.332246 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332250 | orchestrator | 2026-03-19 00:43:36.332269 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-19 00:43:36.332275 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.106) 0:00:18.902 ******** 2026-03-19 00:43:36.332293 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332297 | orchestrator | 2026-03-19 00:43:36.332301 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-19 00:43:36.332305 | orchestrator | 
Thursday 19 March 2026 00:43:34 +0000 (0:00:00.097) 0:00:18.999 ******** 2026-03-19 00:43:36.332309 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332319 | orchestrator | 2026-03-19 00:43:36.332324 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-19 00:43:36.332331 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.123) 0:00:19.123 ******** 2026-03-19 00:43:36.332337 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332343 | orchestrator | 2026-03-19 00:43:36.332347 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-19 00:43:36.332352 | orchestrator | Thursday 19 March 2026 00:43:34 +0000 (0:00:00.118) 0:00:19.241 ******** 2026-03-19 00:43:36.332356 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332362 | orchestrator | 2026-03-19 00:43:36.332368 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-19 00:43:36.332374 | orchestrator | Thursday 19 March 2026 00:43:35 +0000 (0:00:00.117) 0:00:19.359 ******** 2026-03-19 00:43:36.332378 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332383 | orchestrator | 2026-03-19 00:43:36.332415 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-19 00:43:36.332421 | orchestrator | Thursday 19 March 2026 00:43:35 +0000 (0:00:00.123) 0:00:19.483 ******** 2026-03-19 00:43:36.332425 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332429 | orchestrator | 2026-03-19 00:43:36.332437 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-19 00:43:36.332441 | orchestrator | Thursday 19 March 2026 00:43:35 +0000 (0:00:00.133) 0:00:19.617 ******** 2026-03-19 00:43:36.332446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 
'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:36.332452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:36.332459 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332465 | orchestrator | 2026-03-19 00:43:36.332471 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-19 00:43:36.332477 | orchestrator | Thursday 19 March 2026 00:43:35 +0000 (0:00:00.152) 0:00:19.769 ******** 2026-03-19 00:43:36.332484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:36.332489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:36.332494 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332499 | orchestrator | 2026-03-19 00:43:36.332503 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-19 00:43:36.332507 | orchestrator | Thursday 19 March 2026 00:43:35 +0000 (0:00:00.323) 0:00:20.092 ******** 2026-03-19 00:43:36.332511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:36.332517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:36.332530 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332537 | orchestrator | 2026-03-19 00:43:36.332542 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-03-19 00:43:36.332547 | orchestrator | Thursday 19 March 2026 00:43:35 +0000 (0:00:00.148) 0:00:20.240 ******** 2026-03-19 00:43:36.332551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:36.332555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:36.332560 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332564 | orchestrator | 2026-03-19 00:43:36.332568 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-19 00:43:36.332573 | orchestrator | Thursday 19 March 2026 00:43:36 +0000 (0:00:00.133) 0:00:20.373 ******** 2026-03-19 00:43:36.332577 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:36.332582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:36.332588 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:36.332594 | orchestrator | 2026-03-19 00:43:36.332599 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-19 00:43:36.332603 | orchestrator | Thursday 19 March 2026 00:43:36 +0000 (0:00:00.156) 0:00:20.530 ******** 2026-03-19 00:43:36.332611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:41.318101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 
'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:41.318217 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:41.318228 | orchestrator | 2026-03-19 00:43:41.318237 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-19 00:43:41.318247 | orchestrator | Thursday 19 March 2026 00:43:36 +0000 (0:00:00.136) 0:00:20.667 ******** 2026-03-19 00:43:41.318254 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:41.318260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:41.318266 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:41.318272 | orchestrator | 2026-03-19 00:43:41.318278 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-19 00:43:41.318285 | orchestrator | Thursday 19 March 2026 00:43:36 +0000 (0:00:00.152) 0:00:20.819 ******** 2026-03-19 00:43:41.318292 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:41.318314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:41.318320 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:41.318327 | orchestrator | 2026-03-19 00:43:41.318333 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-19 00:43:41.318340 | orchestrator | Thursday 19 March 2026 00:43:36 +0000 (0:00:00.148) 0:00:20.968 ******** 2026-03-19 00:43:41.318347 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:41.318354 | 
orchestrator | 2026-03-19 00:43:41.318385 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-19 00:43:41.318456 | orchestrator | Thursday 19 March 2026 00:43:37 +0000 (0:00:00.505) 0:00:21.474 ******** 2026-03-19 00:43:41.318462 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:41.318467 | orchestrator | 2026-03-19 00:43:41.318473 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-19 00:43:41.318479 | orchestrator | Thursday 19 March 2026 00:43:37 +0000 (0:00:00.533) 0:00:22.007 ******** 2026-03-19 00:43:41.318485 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:43:41.318491 | orchestrator | 2026-03-19 00:43:41.318497 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-19 00:43:41.318503 | orchestrator | Thursday 19 March 2026 00:43:37 +0000 (0:00:00.136) 0:00:22.144 ******** 2026-03-19 00:43:41.318510 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'vg_name': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}) 2026-03-19 00:43:41.318517 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'vg_name': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'}) 2026-03-19 00:43:41.318523 | orchestrator | 2026-03-19 00:43:41.318530 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-19 00:43:41.318536 | orchestrator | Thursday 19 March 2026 00:43:38 +0000 (0:00:00.146) 0:00:22.290 ******** 2026-03-19 00:43:41.318541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:41.318548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 
'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:41.318553 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:41.318558 | orchestrator | 2026-03-19 00:43:41.318565 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-19 00:43:41.318570 | orchestrator | Thursday 19 March 2026 00:43:38 +0000 (0:00:00.131) 0:00:22.422 ******** 2026-03-19 00:43:41.318575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:41.318581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:41.318587 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:41.318593 | orchestrator | 2026-03-19 00:43:41.318598 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-19 00:43:41.318604 | orchestrator | Thursday 19 March 2026 00:43:38 +0000 (0:00:00.321) 0:00:22.743 ******** 2026-03-19 00:43:41.318610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'})  2026-03-19 00:43:41.318616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'})  2026-03-19 00:43:41.318622 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:43:41.318627 | orchestrator | 2026-03-19 00:43:41.318633 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-19 00:43:41.318638 | orchestrator | Thursday 19 March 2026 00:43:38 +0000 (0:00:00.146) 0:00:22.890 ******** 2026-03-19 00:43:41.318663 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 
00:43:41.318669 | orchestrator |  "lvm_report": { 2026-03-19 00:43:41.318675 | orchestrator |  "lv": [ 2026-03-19 00:43:41.318682 | orchestrator |  { 2026-03-19 00:43:41.318688 | orchestrator |  "lv_name": "osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0", 2026-03-19 00:43:41.318695 | orchestrator |  "vg_name": "ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0" 2026-03-19 00:43:41.318700 | orchestrator |  }, 2026-03-19 00:43:41.318712 | orchestrator |  { 2026-03-19 00:43:41.318718 | orchestrator |  "lv_name": "osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c", 2026-03-19 00:43:41.318724 | orchestrator |  "vg_name": "ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c" 2026-03-19 00:43:41.318730 | orchestrator |  } 2026-03-19 00:43:41.318737 | orchestrator |  ], 2026-03-19 00:43:41.318743 | orchestrator |  "pv": [ 2026-03-19 00:43:41.318748 | orchestrator |  { 2026-03-19 00:43:41.318755 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-19 00:43:41.318760 | orchestrator |  "vg_name": "ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0" 2026-03-19 00:43:41.318767 | orchestrator |  }, 2026-03-19 00:43:41.318773 | orchestrator |  { 2026-03-19 00:43:41.318779 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-19 00:43:41.318785 | orchestrator |  "vg_name": "ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c" 2026-03-19 00:43:41.318791 | orchestrator |  } 2026-03-19 00:43:41.318797 | orchestrator |  ] 2026-03-19 00:43:41.318803 | orchestrator |  } 2026-03-19 00:43:41.318810 | orchestrator | } 2026-03-19 00:43:41.318816 | orchestrator | 2026-03-19 00:43:41.318822 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-19 00:43:41.318828 | orchestrator | 2026-03-19 00:43:41.318834 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 00:43:41.318841 | orchestrator | Thursday 19 March 2026 00:43:38 +0000 (0:00:00.248) 0:00:23.139 ******** 2026-03-19 00:43:41.318847 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-03-19 00:43:41.318853 | orchestrator | 2026-03-19 00:43:41.318859 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 00:43:41.318865 | orchestrator | Thursday 19 March 2026 00:43:39 +0000 (0:00:00.223) 0:00:23.363 ******** 2026-03-19 00:43:41.318871 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:43:41.318877 | orchestrator | 2026-03-19 00:43:41.318883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.318889 | orchestrator | Thursday 19 March 2026 00:43:39 +0000 (0:00:00.209) 0:00:23.573 ******** 2026-03-19 00:43:41.318895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-19 00:43:41.318902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-19 00:43:41.318908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-19 00:43:41.318914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-19 00:43:41.318920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-19 00:43:41.318926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-19 00:43:41.318932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-19 00:43:41.318938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-19 00:43:41.318943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-19 00:43:41.318958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-19 00:43:41.318964 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-19 00:43:41.318969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-19 00:43:41.318975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-19 00:43:41.318981 | orchestrator | 2026-03-19 00:43:41.318987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.318993 | orchestrator | Thursday 19 March 2026 00:43:39 +0000 (0:00:00.382) 0:00:23.955 ******** 2026-03-19 00:43:41.318999 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:41.319010 | orchestrator | 2026-03-19 00:43:41.319017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.319023 | orchestrator | Thursday 19 March 2026 00:43:39 +0000 (0:00:00.172) 0:00:24.128 ******** 2026-03-19 00:43:41.319028 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:41.319034 | orchestrator | 2026-03-19 00:43:41.319040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.319046 | orchestrator | Thursday 19 March 2026 00:43:40 +0000 (0:00:00.185) 0:00:24.313 ******** 2026-03-19 00:43:41.319051 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:41.319057 | orchestrator | 2026-03-19 00:43:41.319064 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.319070 | orchestrator | Thursday 19 March 2026 00:43:40 +0000 (0:00:00.188) 0:00:24.501 ******** 2026-03-19 00:43:41.319076 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:41.319082 | orchestrator | 2026-03-19 00:43:41.319088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.319094 | orchestrator | Thursday 19 March 2026 00:43:40 +0000 
(0:00:00.668) 0:00:25.170 ******** 2026-03-19 00:43:41.319099 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:41.319105 | orchestrator | 2026-03-19 00:43:41.319111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:41.319117 | orchestrator | Thursday 19 March 2026 00:43:41 +0000 (0:00:00.205) 0:00:25.375 ******** 2026-03-19 00:43:41.319123 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:41.319129 | orchestrator | 2026-03-19 00:43:41.319140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221133 | orchestrator | Thursday 19 March 2026 00:43:41 +0000 (0:00:00.197) 0:00:25.573 ******** 2026-03-19 00:43:52.221242 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221256 | orchestrator | 2026-03-19 00:43:52.221268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221278 | orchestrator | Thursday 19 March 2026 00:43:41 +0000 (0:00:00.199) 0:00:25.773 ******** 2026-03-19 00:43:52.221288 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221295 | orchestrator | 2026-03-19 00:43:52.221300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221307 | orchestrator | Thursday 19 March 2026 00:43:41 +0000 (0:00:00.200) 0:00:25.973 ******** 2026-03-19 00:43:52.221313 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b) 2026-03-19 00:43:52.221319 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b) 2026-03-19 00:43:52.221325 | orchestrator | 2026-03-19 00:43:52.221331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221336 | orchestrator | Thursday 19 March 2026 00:43:42 +0000 
(0:00:00.421) 0:00:26.395 ******** 2026-03-19 00:43:52.221342 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f) 2026-03-19 00:43:52.221348 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f) 2026-03-19 00:43:52.221353 | orchestrator | 2026-03-19 00:43:52.221376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221382 | orchestrator | Thursday 19 March 2026 00:43:42 +0000 (0:00:00.422) 0:00:26.818 ******** 2026-03-19 00:43:52.221420 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361) 2026-03-19 00:43:52.221429 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361) 2026-03-19 00:43:52.221434 | orchestrator | 2026-03-19 00:43:52.221440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221446 | orchestrator | Thursday 19 March 2026 00:43:43 +0000 (0:00:00.567) 0:00:27.385 ******** 2026-03-19 00:43:52.221452 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d) 2026-03-19 00:43:52.221481 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d) 2026-03-19 00:43:52.221487 | orchestrator | 2026-03-19 00:43:52.221492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:43:52.221498 | orchestrator | Thursday 19 March 2026 00:43:43 +0000 (0:00:00.541) 0:00:27.927 ******** 2026-03-19 00:43:52.221504 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 00:43:52.221510 | orchestrator | 2026-03-19 00:43:52.221516 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 
00:43:52.221521 | orchestrator | Thursday 19 March 2026 00:43:44 +0000 (0:00:00.368) 0:00:28.295 ******** 2026-03-19 00:43:52.221527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-19 00:43:52.221534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-19 00:43:52.221539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-19 00:43:52.221545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-19 00:43:52.221550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-19 00:43:52.221556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-19 00:43:52.221561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-19 00:43:52.221567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-19 00:43:52.221573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-19 00:43:52.221579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-19 00:43:52.221585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-19 00:43:52.221590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-19 00:43:52.221596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-19 00:43:52.221601 | orchestrator | 2026-03-19 00:43:52.221607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221613 | 
orchestrator | Thursday 19 March 2026 00:43:44 +0000 (0:00:00.609) 0:00:28.904 ******** 2026-03-19 00:43:52.221618 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221624 | orchestrator | 2026-03-19 00:43:52.221631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221637 | orchestrator | Thursday 19 March 2026 00:43:44 +0000 (0:00:00.215) 0:00:29.119 ******** 2026-03-19 00:43:52.221644 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221651 | orchestrator | 2026-03-19 00:43:52.221657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221664 | orchestrator | Thursday 19 March 2026 00:43:45 +0000 (0:00:00.236) 0:00:29.356 ******** 2026-03-19 00:43:52.221670 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221676 | orchestrator | 2026-03-19 00:43:52.221697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221704 | orchestrator | Thursday 19 March 2026 00:43:45 +0000 (0:00:00.233) 0:00:29.589 ******** 2026-03-19 00:43:52.221711 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221717 | orchestrator | 2026-03-19 00:43:52.221724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221731 | orchestrator | Thursday 19 March 2026 00:43:45 +0000 (0:00:00.206) 0:00:29.796 ******** 2026-03-19 00:43:52.221737 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221744 | orchestrator | 2026-03-19 00:43:52.221751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221764 | orchestrator | Thursday 19 March 2026 00:43:45 +0000 (0:00:00.220) 0:00:30.016 ******** 2026-03-19 00:43:52.221770 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221776 | orchestrator | 2026-03-19 
00:43:52.221782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221788 | orchestrator | Thursday 19 March 2026 00:43:45 +0000 (0:00:00.220) 0:00:30.237 ******** 2026-03-19 00:43:52.221794 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221799 | orchestrator | 2026-03-19 00:43:52.221805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221811 | orchestrator | Thursday 19 March 2026 00:43:46 +0000 (0:00:00.222) 0:00:30.460 ******** 2026-03-19 00:43:52.221816 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221822 | orchestrator | 2026-03-19 00:43:52.221828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221837 | orchestrator | Thursday 19 March 2026 00:43:46 +0000 (0:00:00.205) 0:00:30.665 ******** 2026-03-19 00:43:52.221844 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-19 00:43:52.221850 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-19 00:43:52.221856 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-19 00:43:52.221861 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-19 00:43:52.221867 | orchestrator | 2026-03-19 00:43:52.221873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221879 | orchestrator | Thursday 19 March 2026 00:43:47 +0000 (0:00:01.003) 0:00:31.668 ******** 2026-03-19 00:43:52.221884 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221890 | orchestrator | 2026-03-19 00:43:52.221896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221901 | orchestrator | Thursday 19 March 2026 00:43:47 +0000 (0:00:00.191) 0:00:31.860 ******** 2026-03-19 00:43:52.221907 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
00:43:52.221913 | orchestrator | 2026-03-19 00:43:52.221918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221924 | orchestrator | Thursday 19 March 2026 00:43:47 +0000 (0:00:00.213) 0:00:32.074 ******** 2026-03-19 00:43:52.221930 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221936 | orchestrator | 2026-03-19 00:43:52.221941 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:43:52.221947 | orchestrator | Thursday 19 March 2026 00:43:48 +0000 (0:00:00.682) 0:00:32.756 ******** 2026-03-19 00:43:52.221953 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221958 | orchestrator | 2026-03-19 00:43:52.221964 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-19 00:43:52.221970 | orchestrator | Thursday 19 March 2026 00:43:48 +0000 (0:00:00.211) 0:00:32.968 ******** 2026-03-19 00:43:52.221975 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.221981 | orchestrator | 2026-03-19 00:43:52.221987 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-19 00:43:52.221992 | orchestrator | Thursday 19 March 2026 00:43:48 +0000 (0:00:00.147) 0:00:33.115 ******** 2026-03-19 00:43:52.221998 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c9339aa0-dcb3-5462-b16c-1d446efe678c'}}) 2026-03-19 00:43:52.222004 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}}) 2026-03-19 00:43:52.222010 | orchestrator | 2026-03-19 00:43:52.222043 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-19 00:43:52.222049 | orchestrator | Thursday 19 March 2026 00:43:49 +0000 (0:00:00.189) 0:00:33.305 ******** 2026-03-19 00:43:52.222055 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'}) 2026-03-19 00:43:52.222061 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}) 2026-03-19 00:43:52.222071 | orchestrator | 2026-03-19 00:43:52.222077 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-19 00:43:52.222083 | orchestrator | Thursday 19 March 2026 00:43:50 +0000 (0:00:01.865) 0:00:35.170 ******** 2026-03-19 00:43:52.222089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:52.222096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:52.222102 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:52.222107 | orchestrator | 2026-03-19 00:43:52.222113 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-19 00:43:52.222119 | orchestrator | Thursday 19 March 2026 00:43:51 +0000 (0:00:00.151) 0:00:35.322 ******** 2026-03-19 00:43:52.222125 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'}) 2026-03-19 00:43:52.222135 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}) 2026-03-19 00:43:57.568997 | orchestrator | 2026-03-19 00:43:57.569085 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-19 00:43:57.569092 | orchestrator | Thursday 19 March 2026 
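An aside on the naming scheme visible in the "Create block VGs" / "Create block LVs" tasks above: each `osd_lvm_uuid` from `ceph_osd_devices` yields a `ceph-<uuid>` volume group backed by the device, plus an `osd-block-<uuid>` logical volume inside it. A minimal Python sketch of that mapping, assuming the dict shape shown in the "Create dict of block VGs -> PVs" task; the helper function and the `pv` key are illustrative, not taken from the playbook source:

```python
# Illustrative sketch: derive the VG/LV names seen in the log output
# from a ceph_osd_devices-style mapping. The ceph-<uuid>/osd-block-<uuid>
# pattern is visible in the task output above; the helper is hypothetical.

def block_vgs_and_lvs(ceph_osd_devices: dict) -> list[dict]:
    """Return one {'data': <lv>, 'data_vg': <vg>, 'pv': <device>} per OSD."""
    entries = []
    for device, props in sorted(ceph_osd_devices.items()):
        uuid = props["osd_lvm_uuid"]
        entries.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
            "pv": f"/dev/{device}",        # backing physical volume (assumed)
        })
    return entries

devices = {
    "sdb": {"osd_lvm_uuid": "c9339aa0-dcb3-5462-b16c-1d446efe678c"},
    "sdc": {"osd_lvm_uuid": "0813f2fe-0b5e-5f32-866c-c0f68041cbc1"},
}
for entry in block_vgs_and_lvs(devices):
    print(entry["data_vg"], "->", entry["pv"])
```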
00:43:52 +0000 (0:00:01.241) 0:00:36.564 ******** 2026-03-19 00:43:57.569097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:57.569104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:57.569108 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569113 | orchestrator | 2026-03-19 00:43:57.569117 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-19 00:43:57.569121 | orchestrator | Thursday 19 March 2026 00:43:52 +0000 (0:00:00.164) 0:00:36.728 ******** 2026-03-19 00:43:57.569125 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569129 | orchestrator | 2026-03-19 00:43:57.569133 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-19 00:43:57.569137 | orchestrator | Thursday 19 March 2026 00:43:52 +0000 (0:00:00.134) 0:00:36.863 ******** 2026-03-19 00:43:57.569141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:57.569145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:57.569149 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569152 | orchestrator | 2026-03-19 00:43:57.569156 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-19 00:43:57.569160 | orchestrator | Thursday 19 March 2026 00:43:52 +0000 (0:00:00.148) 0:00:37.011 ******** 2026-03-19 00:43:57.569164 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
00:43:57.569168 | orchestrator | 2026-03-19 00:43:57.569172 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-19 00:43:57.569176 | orchestrator | Thursday 19 March 2026 00:43:52 +0000 (0:00:00.124) 0:00:37.136 ******** 2026-03-19 00:43:57.569179 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:57.569183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:57.569205 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569209 | orchestrator | 2026-03-19 00:43:57.569212 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-19 00:43:57.569216 | orchestrator | Thursday 19 March 2026 00:43:53 +0000 (0:00:00.158) 0:00:37.294 ******** 2026-03-19 00:43:57.569220 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569224 | orchestrator | 2026-03-19 00:43:57.569242 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-19 00:43:57.569246 | orchestrator | Thursday 19 March 2026 00:43:53 +0000 (0:00:00.332) 0:00:37.627 ******** 2026-03-19 00:43:57.569250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:57.569254 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:57.569258 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569261 | orchestrator | 2026-03-19 00:43:57.569265 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-03-19 00:43:57.569269 | orchestrator | Thursday 19 March 2026 00:43:53 +0000 (0:00:00.175) 0:00:37.803 ******** 2026-03-19 00:43:57.569273 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:43:57.569277 | orchestrator | 2026-03-19 00:43:57.569281 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-19 00:43:57.569285 | orchestrator | Thursday 19 March 2026 00:43:53 +0000 (0:00:00.143) 0:00:37.946 ******** 2026-03-19 00:43:57.569289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:57.569292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:57.569296 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569300 | orchestrator | 2026-03-19 00:43:57.569304 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-19 00:43:57.569307 | orchestrator | Thursday 19 March 2026 00:43:53 +0000 (0:00:00.156) 0:00:38.103 ******** 2026-03-19 00:43:57.569311 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:43:57.569315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:43:57.569319 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569322 | orchestrator | 2026-03-19 00:43:57.569326 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-19 00:43:57.569341 | orchestrator | Thursday 19 March 2026 00:43:53 +0000 (0:00:00.144) 0:00:38.247 
********
2026-03-19 00:43:57.569345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'}) 
2026-03-19 00:43:57.569349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}) 
2026-03-19 00:43:57.569353 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:43:57.569356 | orchestrator | 
2026-03-19 00:43:57.569360 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-19 00:43:57.569364 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.158) 0:00:38.406 ********
2026-03-19 00:43:57.569368 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:43:57.569371 | orchestrator | 
2026-03-19 00:43:57.569375 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-19 00:43:57.569379 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.126) 0:00:38.533 ********
2026-03-19 00:43:57.569435 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:43:57.569440 | orchestrator | 
2026-03-19 00:43:57.569444 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-19 00:43:57.569458 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.120) 0:00:38.653 ********
2026-03-19 00:43:57.569462 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:43:57.569466 | orchestrator | 
2026-03-19 00:43:57.569469 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-19 00:43:57.569473 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.130) 0:00:38.784 ********
2026-03-19 00:43:57.569477 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 00:43:57.569487 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-19 00:43:57.569491 | orchestrator | }
2026-03-19 00:43:57.569495 | orchestrator | 
2026-03-19 00:43:57.569499 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-19 00:43:57.569503 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.144) 0:00:38.928 ********
2026-03-19 00:43:57.569507 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 00:43:57.569510 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-19 00:43:57.569514 | orchestrator | }
2026-03-19 00:43:57.569518 | orchestrator | 
2026-03-19 00:43:57.569522 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-19 00:43:57.569526 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.130) 0:00:39.059 ********
2026-03-19 00:43:57.569529 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 00:43:57.569534 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-19 00:43:57.569537 | orchestrator | }
2026-03-19 00:43:57.569541 | orchestrator | 
2026-03-19 00:43:57.569545 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-19 00:43:57.569548 | orchestrator | Thursday 19 March 2026 00:43:54 +0000 (0:00:00.164) 0:00:39.224 ********
2026-03-19 00:43:57.569552 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:43:57.569556 | orchestrator | 
2026-03-19 00:43:57.569560 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-19 00:43:57.569565 | orchestrator | Thursday 19 March 2026 00:43:55 +0000 (0:00:00.635) 0:00:39.859 ********
2026-03-19 00:43:57.569569 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:43:57.569573 | orchestrator | 
2026-03-19 00:43:57.569578 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-19 00:43:57.569582 | orchestrator | Thursday 19 March 2026 00:43:56 +0000 (0:00:00.497) 0:00:40.357 ********
2026-03-19 00:43:57.569587 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:43:57.569591 | orchestrator | 
2026-03-19 00:43:57.569595 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-19 00:43:57.569599 | orchestrator | Thursday 19 March 2026 00:43:56 +0000 (0:00:00.154) 0:00:40.844 ********
2026-03-19 00:43:57.569603 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:43:57.569607 | orchestrator | 
2026-03-19 00:43:57.569612 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-19 00:43:57.569616 | orchestrator | Thursday 19 March 2026 00:43:56 +0000 (0:00:00.098) 0:00:40.999 ********
2026-03-19 00:43:57.569620 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:43:57.569624 | orchestrator | 
2026-03-19 00:43:57.569628 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-19 00:43:57.569633 | orchestrator | Thursday 19 March 2026 00:43:56 +0000 (0:00:00.116) 0:00:41.097 ********
2026-03-19 00:43:57.569637 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:43:57.569641 | orchestrator | 
2026-03-19 00:43:57.569646 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-19 00:43:57.569650 | orchestrator | Thursday 19 March 2026 00:43:56 +0000 (0:00:00.125) 0:00:41.214 ********
2026-03-19 00:43:57.569654 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 00:43:57.569659 | orchestrator |  "vgs_report": {
2026-03-19 00:43:57.569663 | orchestrator |  "vg": []
2026-03-19 00:43:57.569668 | orchestrator |  }
2026-03-19 00:43:57.569672 | orchestrator | }
2026-03-19 00:43:57.569679 | orchestrator | 
2026-03-19 00:43:57.569684 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-19 00:43:57.569688 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.125) 0:00:41.340 ********
2026-03-19 00:43:57.569692 |
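The "Gather ... VGs with total and available size in bytes" tasks above evidently collect LVM reporting output and combine it into the `vgs_report` structure printed later. A hedged sketch of how such output could be flattened for the size checks; it assumes the JSON shape produced by `vgs --reportformat json` (a `report` list containing `vg` entries) with `--units b --nosuffix`-style plain byte counts, and the sample values below are invented:

```python
import json

# Sample data in the shape of `vgs --reportformat json` output
# (values invented for illustration only).
raw = json.dumps({
    "report": [{
        "vg": [
            {"vg_name": "ceph-db", "vg_size": "21470642176", "vg_free": "10735321088"},
        ]
    }]
})

def flatten_vgs(cmd_output: str) -> dict:
    """Map vg_name -> {'total': bytes, 'free': bytes} from vgs JSON output."""
    vgs = {}
    for block in json.loads(cmd_output)["report"]:
        for vg in block.get("vg", []):
            vgs[vg["vg_name"]] = {
                "total": int(vg["vg_size"]),
                "free": int(vg["vg_free"]),
            }
    return vgs

print(flatten_vgs(raw))
```

A consumer could then compare the size needed for DB/WAL LVs against `free` per VG, which is what the subsequent "Fail if size ... > available" tasks appear to do.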
orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569696 | orchestrator | 2026-03-19 00:43:57.569701 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-19 00:43:57.569705 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.114) 0:00:41.455 ******** 2026-03-19 00:43:57.569709 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569713 | orchestrator | 2026-03-19 00:43:57.569717 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-19 00:43:57.569722 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.119) 0:00:41.574 ******** 2026-03-19 00:43:57.569726 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569730 | orchestrator | 2026-03-19 00:43:57.569734 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-19 00:43:57.569739 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.131) 0:00:41.706 ******** 2026-03-19 00:43:57.569743 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:43:57.569748 | orchestrator | 2026-03-19 00:43:57.569755 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-19 00:44:02.180014 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.119) 0:00:41.825 ******** 2026-03-19 00:44:02.180108 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180120 | orchestrator | 2026-03-19 00:44:02.180128 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-19 00:44:02.180136 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.121) 0:00:41.947 ******** 2026-03-19 00:44:02.180143 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180150 | orchestrator | 2026-03-19 00:44:02.180158 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-03-19 00:44:02.180165 | orchestrator | Thursday 19 March 2026 00:43:57 +0000 (0:00:00.312) 0:00:42.260 ******** 2026-03-19 00:44:02.180172 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180179 | orchestrator | 2026-03-19 00:44:02.180186 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-19 00:44:02.180193 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.114) 0:00:42.375 ******** 2026-03-19 00:44:02.180200 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180208 | orchestrator | 2026-03-19 00:44:02.180215 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-19 00:44:02.180222 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.128) 0:00:42.503 ******** 2026-03-19 00:44:02.180247 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180254 | orchestrator | 2026-03-19 00:44:02.180261 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-19 00:44:02.180268 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.138) 0:00:42.642 ******** 2026-03-19 00:44:02.180275 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180282 | orchestrator | 2026-03-19 00:44:02.180290 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-19 00:44:02.180297 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.126) 0:00:42.768 ******** 2026-03-19 00:44:02.180304 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180311 | orchestrator | 2026-03-19 00:44:02.180318 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-19 00:44:02.180326 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.109) 0:00:42.878 ******** 2026-03-19 00:44:02.180333 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180340 
| orchestrator | 2026-03-19 00:44:02.180347 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-19 00:44:02.180354 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.126) 0:00:43.005 ******** 2026-03-19 00:44:02.180361 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180420 | orchestrator | 2026-03-19 00:44:02.180436 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-19 00:44:02.180449 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.123) 0:00:43.129 ******** 2026-03-19 00:44:02.180462 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180473 | orchestrator | 2026-03-19 00:44:02.180480 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-19 00:44:02.180487 | orchestrator | Thursday 19 March 2026 00:43:58 +0000 (0:00:00.124) 0:00:43.253 ******** 2026-03-19 00:44:02.180495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180504 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180511 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180519 | orchestrator | 2026-03-19 00:44:02.180526 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-19 00:44:02.180533 | orchestrator | Thursday 19 March 2026 00:43:59 +0000 (0:00:00.148) 0:00:43.401 ******** 2026-03-19 00:44:02.180540 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180549 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180558 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180565 | orchestrator | 2026-03-19 00:44:02.180574 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-19 00:44:02.180582 | orchestrator | Thursday 19 March 2026 00:43:59 +0000 (0:00:00.161) 0:00:43.563 ******** 2026-03-19 00:44:02.180590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180607 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180616 | orchestrator | 2026-03-19 00:44:02.180625 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-19 00:44:02.180633 | orchestrator | Thursday 19 March 2026 00:43:59 +0000 (0:00:00.174) 0:00:43.738 ******** 2026-03-19 00:44:02.180641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180659 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180667 | orchestrator | 2026-03-19 00:44:02.180690 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-19 00:44:02.180699 | orchestrator | Thursday 19 March 2026 00:43:59 +0000 (0:00:00.374) 0:00:44.112 ******** 2026-03-19 
00:44:02.180707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180724 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180732 | orchestrator | 2026-03-19 00:44:02.180741 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-19 00:44:02.180749 | orchestrator | Thursday 19 March 2026 00:44:00 +0000 (0:00:00.156) 0:00:44.269 ******** 2026-03-19 00:44:02.180763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180781 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180789 | orchestrator | 2026-03-19 00:44:02.180797 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-19 00:44:02.180806 | orchestrator | Thursday 19 March 2026 00:44:00 +0000 (0:00:00.174) 0:00:44.443 ******** 2026-03-19 00:44:02.180815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180833 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180841 | orchestrator | 
2026-03-19 00:44:02.180849 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-19 00:44:02.180856 | orchestrator | Thursday 19 March 2026 00:44:00 +0000 (0:00:00.153) 0:00:44.597 ******** 2026-03-19 00:44:02.180863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.180871 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.180878 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.180885 | orchestrator | 2026-03-19 00:44:02.180892 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-19 00:44:02.180899 | orchestrator | Thursday 19 March 2026 00:44:00 +0000 (0:00:00.163) 0:00:44.760 ******** 2026-03-19 00:44:02.180906 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:44:02.180914 | orchestrator | 2026-03-19 00:44:02.180921 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-19 00:44:02.180928 | orchestrator | Thursday 19 March 2026 00:44:01 +0000 (0:00:00.528) 0:00:45.288 ******** 2026-03-19 00:44:02.180935 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:44:02.180942 | orchestrator | 2026-03-19 00:44:02.180949 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-19 00:44:02.180956 | orchestrator | Thursday 19 March 2026 00:44:01 +0000 (0:00:00.576) 0:00:45.866 ******** 2026-03-19 00:44:02.180964 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:44:02.180971 | orchestrator | 2026-03-19 00:44:02.180978 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-19 00:44:02.180985 | orchestrator | Thursday 19 March 2026 
00:44:01 +0000 (0:00:00.148) 0:00:46.014 ******** 2026-03-19 00:44:02.180992 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'vg_name': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}) 2026-03-19 00:44:02.181001 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'vg_name': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'}) 2026-03-19 00:44:02.181008 | orchestrator | 2026-03-19 00:44:02.181015 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-19 00:44:02.181022 | orchestrator | Thursday 19 March 2026 00:44:01 +0000 (0:00:00.178) 0:00:46.193 ******** 2026-03-19 00:44:02.181029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.181073 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:02.181087 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:02.181107 | orchestrator | 2026-03-19 00:44:02.181120 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-19 00:44:02.181132 | orchestrator | Thursday 19 March 2026 00:44:02 +0000 (0:00:00.165) 0:00:46.359 ******** 2026-03-19 00:44:02.181143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'})  2026-03-19 00:44:02.181163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'})  2026-03-19 00:44:08.357535 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:08.357685 | orchestrator | 2026-03-19 
00:44:08.357724 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-19 00:44:08.357741 | orchestrator | Thursday 19 March 2026 00:44:02 +0000 (0:00:00.156) 0:00:46.516 ********
2026-03-19 00:44:08.357754 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'}) 
2026-03-19 00:44:08.357769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}) 
2026-03-19 00:44:08.357782 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:44:08.357795 | orchestrator | 
2026-03-19 00:44:08.357808 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-19 00:44:08.357820 | orchestrator | Thursday 19 March 2026 00:44:02 +0000 (0:00:00.207) 0:00:46.723 ********
2026-03-19 00:44:08.357833 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 00:44:08.357845 | orchestrator |  "lvm_report": {
2026-03-19 00:44:08.357871 | orchestrator |  "lv": [
2026-03-19 00:44:08.357904 | orchestrator |  {
2026-03-19 00:44:08.357919 | orchestrator |  "lv_name": "osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1",
2026-03-19 00:44:08.357933 | orchestrator |  "vg_name": "ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1"
2026-03-19 00:44:08.357946 | orchestrator |  },
2026-03-19 00:44:08.357957 | orchestrator |  {
2026-03-19 00:44:08.357969 | orchestrator |  "lv_name": "osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c",
2026-03-19 00:44:08.357981 | orchestrator |  "vg_name": "ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c"
2026-03-19 00:44:08.357994 | orchestrator |  }
2026-03-19 00:44:08.358006 | orchestrator |  ],
2026-03-19 00:44:08.358081 | orchestrator |  "pv": [
2026-03-19 00:44:08.358095 | orchestrator |  {
2026-03-19 00:44:08.358107 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-19 00:44:08.358118 | orchestrator |  "vg_name": "ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c"
2026-03-19 00:44:08.358131 | orchestrator |  },
2026-03-19 00:44:08.358142 | orchestrator |  {
2026-03-19 00:44:08.358154 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-19 00:44:08.358166 | orchestrator |  "vg_name": "ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1"
2026-03-19 00:44:08.358180 | orchestrator |  }
2026-03-19 00:44:08.358193 | orchestrator |  ]
2026-03-19 00:44:08.358205 | orchestrator |  }
2026-03-19 00:44:08.358218 | orchestrator | }
2026-03-19 00:44:08.358231 | orchestrator | 
2026-03-19 00:44:08.358245 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-19 00:44:08.358258 | orchestrator | 
2026-03-19 00:44:08.358269 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-19 00:44:08.358282 | orchestrator | Thursday 19 March 2026 00:44:02 +0000 (0:00:00.486) 0:00:47.209 ********
2026-03-19 00:44:08.358307 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-19 00:44:08.358322 | orchestrator | 
2026-03-19 00:44:08.358335 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-19 00:44:08.358349 | orchestrator | Thursday 19 March 2026 00:44:03 +0000 (0:00:00.247) 0:00:47.457 ********
2026-03-19 00:44:08.358511 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:44:08.358532 | orchestrator | 
2026-03-19 00:44:08.358545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 00:44:08.358558 | orchestrator | Thursday 19 March 2026 00:44:03 +0000 (0:00:00.254) 0:00:47.711 ********
2026-03-19 00:44:08.358572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-19 00:44:08.358585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-19
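The `lvm_report` printed above pairs each Ceph LV and PV with its volume group, matching the earlier "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task. A minimal sketch of assembling such a report; the `build_lvm_report` helper is hypothetical, and it assumes `lvs`/`pvs` `--reportformat json` output (a `report` list with `lv`/`pv` entries, field names as in the LVM JSON report format):

```python
import json

def build_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    """Combine lvs/pvs JSON report output into {'lv': [...], 'pv': [...]}."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {
        "lv": sorted(lvs, key=lambda e: e["lv_name"]),
        "pv": sorted(pvs, key=lambda e: e["pv_name"]),
    }

# Sample inputs shaped like `lvs`/`pvs --reportformat json` output,
# using the names that appear in the log above.
lvs_json = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c",
     "vg_name": "ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c"},
]}]})
pvs_json = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c"},
]}]})
print(json.dumps(build_lvm_report(lvs_json, pvs_json), indent=2))
```

Matching LV and PV entries on `vg_name` is then enough to verify that every block LV defined in `lvm_volumes` actually exists, which is what the "Fail if ... LV defined in lvm_volumes is missing" checks appear to assert.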
00:44:08.358597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-19 00:44:08.358615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-19 00:44:08.358627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-19 00:44:08.358640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-19 00:44:08.358652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-19 00:44:08.358664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-19 00:44:08.358676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-19 00:44:08.358688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-19 00:44:08.358701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-19 00:44:08.358712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-19 00:44:08.358724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-19 00:44:08.358736 | orchestrator | 2026-03-19 00:44:08.358747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.358759 | orchestrator | Thursday 19 March 2026 00:44:03 +0000 (0:00:00.455) 0:00:48.167 ******** 2026-03-19 00:44:08.358771 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.358783 | orchestrator | 2026-03-19 00:44:08.358795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.358808 | orchestrator | Thursday 19 March 2026 00:44:04 +0000 (0:00:00.202) 0:00:48.369 
******** 2026-03-19 00:44:08.358820 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.358833 | orchestrator | 2026-03-19 00:44:08.358848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.358888 | orchestrator | Thursday 19 March 2026 00:44:04 +0000 (0:00:00.192) 0:00:48.562 ******** 2026-03-19 00:44:08.358902 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.358914 | orchestrator | 2026-03-19 00:44:08.358927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.358939 | orchestrator | Thursday 19 March 2026 00:44:04 +0000 (0:00:00.193) 0:00:48.756 ******** 2026-03-19 00:44:08.358950 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.358962 | orchestrator | 2026-03-19 00:44:08.358973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.358985 | orchestrator | Thursday 19 March 2026 00:44:04 +0000 (0:00:00.195) 0:00:48.951 ******** 2026-03-19 00:44:08.358998 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.359011 | orchestrator | 2026-03-19 00:44:08.359023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359035 | orchestrator | Thursday 19 March 2026 00:44:04 +0000 (0:00:00.191) 0:00:49.143 ******** 2026-03-19 00:44:08.359047 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.359059 | orchestrator | 2026-03-19 00:44:08.359071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359098 | orchestrator | Thursday 19 March 2026 00:44:05 +0000 (0:00:00.641) 0:00:49.785 ******** 2026-03-19 00:44:08.359110 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.359133 | orchestrator | 2026-03-19 00:44:08.359146 | orchestrator | TASK [Add known links to the list of available 
block devices] ****************** 2026-03-19 00:44:08.359158 | orchestrator | Thursday 19 March 2026 00:44:05 +0000 (0:00:00.216) 0:00:50.001 ******** 2026-03-19 00:44:08.359171 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:08.359184 | orchestrator | 2026-03-19 00:44:08.359197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359209 | orchestrator | Thursday 19 March 2026 00:44:05 +0000 (0:00:00.189) 0:00:50.191 ******** 2026-03-19 00:44:08.359222 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99) 2026-03-19 00:44:08.359237 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99) 2026-03-19 00:44:08.359250 | orchestrator | 2026-03-19 00:44:08.359262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359275 | orchestrator | Thursday 19 March 2026 00:44:06 +0000 (0:00:00.430) 0:00:50.621 ******** 2026-03-19 00:44:08.359288 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5) 2026-03-19 00:44:08.359301 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5) 2026-03-19 00:44:08.359315 | orchestrator | 2026-03-19 00:44:08.359329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359341 | orchestrator | Thursday 19 March 2026 00:44:06 +0000 (0:00:00.430) 0:00:51.051 ******** 2026-03-19 00:44:08.359353 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400) 2026-03-19 00:44:08.359366 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400) 2026-03-19 00:44:08.359377 | orchestrator | 2026-03-19 00:44:08.359414 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359428 | orchestrator | Thursday 19 March 2026 00:44:07 +0000 (0:00:00.438) 0:00:51.490 ******** 2026-03-19 00:44:08.359440 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85) 2026-03-19 00:44:08.359452 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85) 2026-03-19 00:44:08.359464 | orchestrator | 2026-03-19 00:44:08.359475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 00:44:08.359487 | orchestrator | Thursday 19 March 2026 00:44:07 +0000 (0:00:00.446) 0:00:51.936 ******** 2026-03-19 00:44:08.359500 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 00:44:08.359513 | orchestrator | 2026-03-19 00:44:08.359525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:08.359538 | orchestrator | Thursday 19 March 2026 00:44:08 +0000 (0:00:00.337) 0:00:52.273 ******** 2026-03-19 00:44:08.359551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-19 00:44:08.359564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-19 00:44:08.359577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-19 00:44:08.359591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-19 00:44:08.359604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-19 00:44:08.359617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-19 00:44:08.359631 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-19 00:44:08.359645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-19 00:44:08.359657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-19 00:44:08.359681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-19 00:44:08.359694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-19 00:44:08.359718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-19 00:44:16.749880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-19 00:44:16.749963 | orchestrator | 2026-03-19 00:44:16.749971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.749976 | orchestrator | Thursday 19 March 2026 00:44:08 +0000 (0:00:00.423) 0:00:52.697 ******** 2026-03-19 00:44:16.749982 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.749987 | orchestrator | 2026-03-19 00:44:16.750003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750009 | orchestrator | Thursday 19 March 2026 00:44:08 +0000 (0:00:00.185) 0:00:52.883 ******** 2026-03-19 00:44:16.750049 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750055 | orchestrator | 2026-03-19 00:44:16.750060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750065 | orchestrator | Thursday 19 March 2026 00:44:08 +0000 (0:00:00.201) 0:00:53.084 ******** 2026-03-19 00:44:16.750070 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750075 | orchestrator | 2026-03-19 00:44:16.750080 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750099 | orchestrator | Thursday 19 March 2026 00:44:09 +0000 (0:00:00.677) 0:00:53.762 ******** 2026-03-19 00:44:16.750104 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750109 | orchestrator | 2026-03-19 00:44:16.750113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750118 | orchestrator | Thursday 19 March 2026 00:44:09 +0000 (0:00:00.215) 0:00:53.977 ******** 2026-03-19 00:44:16.750122 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750127 | orchestrator | 2026-03-19 00:44:16.750132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750136 | orchestrator | Thursday 19 March 2026 00:44:09 +0000 (0:00:00.196) 0:00:54.173 ******** 2026-03-19 00:44:16.750141 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750145 | orchestrator | 2026-03-19 00:44:16.750150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750155 | orchestrator | Thursday 19 March 2026 00:44:10 +0000 (0:00:00.197) 0:00:54.371 ******** 2026-03-19 00:44:16.750159 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750164 | orchestrator | 2026-03-19 00:44:16.750168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750173 | orchestrator | Thursday 19 March 2026 00:44:10 +0000 (0:00:00.200) 0:00:54.571 ******** 2026-03-19 00:44:16.750178 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750182 | orchestrator | 2026-03-19 00:44:16.750187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750192 | orchestrator | Thursday 19 March 2026 00:44:10 +0000 (0:00:00.204) 0:00:54.776 ******** 
2026-03-19 00:44:16.750197 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-19 00:44:16.750202 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-19 00:44:16.750207 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-19 00:44:16.750212 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-19 00:44:16.750217 | orchestrator | 2026-03-19 00:44:16.750221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750226 | orchestrator | Thursday 19 March 2026 00:44:11 +0000 (0:00:00.646) 0:00:55.422 ******** 2026-03-19 00:44:16.750230 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750235 | orchestrator | 2026-03-19 00:44:16.750240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750260 | orchestrator | Thursday 19 March 2026 00:44:11 +0000 (0:00:00.200) 0:00:55.623 ******** 2026-03-19 00:44:16.750264 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750269 | orchestrator | 2026-03-19 00:44:16.750273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750278 | orchestrator | Thursday 19 March 2026 00:44:11 +0000 (0:00:00.238) 0:00:55.862 ******** 2026-03-19 00:44:16.750283 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750287 | orchestrator | 2026-03-19 00:44:16.750292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 00:44:16.750296 | orchestrator | Thursday 19 March 2026 00:44:11 +0000 (0:00:00.195) 0:00:56.058 ******** 2026-03-19 00:44:16.750301 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750305 | orchestrator | 2026-03-19 00:44:16.750310 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-19 00:44:16.750314 | orchestrator | Thursday 19 March 2026 00:44:11 
+0000 (0:00:00.187) 0:00:56.245 ******** 2026-03-19 00:44:16.750319 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750323 | orchestrator | 2026-03-19 00:44:16.750328 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-19 00:44:16.750332 | orchestrator | Thursday 19 March 2026 00:44:12 +0000 (0:00:00.272) 0:00:56.517 ******** 2026-03-19 00:44:16.750337 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7952abd-f19d-5f54-b846-7c46d615b8fb'}}) 2026-03-19 00:44:16.750342 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '056512d9-3a02-5302-afc2-fa0158449af3'}}) 2026-03-19 00:44:16.750347 | orchestrator | 2026-03-19 00:44:16.750351 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-19 00:44:16.750356 | orchestrator | Thursday 19 March 2026 00:44:12 +0000 (0:00:00.175) 0:00:56.693 ******** 2026-03-19 00:44:16.750361 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'}) 2026-03-19 00:44:16.750366 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'}) 2026-03-19 00:44:16.750371 | orchestrator | 2026-03-19 00:44:16.750376 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-19 00:44:16.750421 | orchestrator | Thursday 19 March 2026 00:44:14 +0000 (0:00:01.801) 0:00:58.495 ******** 2026-03-19 00:44:16.750427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:16.750433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:16.750438 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750443 | orchestrator | 2026-03-19 00:44:16.750448 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-19 00:44:16.750454 | orchestrator | Thursday 19 March 2026 00:44:14 +0000 (0:00:00.138) 0:00:58.633 ******** 2026-03-19 00:44:16.750459 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'}) 2026-03-19 00:44:16.750464 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'}) 2026-03-19 00:44:16.750469 | orchestrator | 2026-03-19 00:44:16.750475 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-19 00:44:16.750480 | orchestrator | Thursday 19 March 2026 00:44:15 +0000 (0:00:01.255) 0:00:59.888 ******** 2026-03-19 00:44:16.750485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:16.750495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:16.750500 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750505 | orchestrator | 2026-03-19 00:44:16.750510 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-19 00:44:16.750515 | orchestrator | Thursday 19 March 2026 00:44:15 +0000 (0:00:00.130) 0:01:00.019 ******** 2026-03-19 00:44:16.750520 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750526 | 
orchestrator | 2026-03-19 00:44:16.750531 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-19 00:44:16.750536 | orchestrator | Thursday 19 March 2026 00:44:15 +0000 (0:00:00.123) 0:01:00.143 ******** 2026-03-19 00:44:16.750541 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:16.750546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:16.750552 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750557 | orchestrator | 2026-03-19 00:44:16.750562 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-19 00:44:16.750567 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.142) 0:01:00.285 ******** 2026-03-19 00:44:16.750572 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750578 | orchestrator | 2026-03-19 00:44:16.750583 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-19 00:44:16.750593 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.118) 0:01:00.404 ******** 2026-03-19 00:44:16.750598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:16.750603 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:16.750609 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750614 | orchestrator | 2026-03-19 00:44:16.750619 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-03-19 00:44:16.750624 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.140) 0:01:00.544 ******** 2026-03-19 00:44:16.750629 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750634 | orchestrator | 2026-03-19 00:44:16.750639 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-19 00:44:16.750643 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.117) 0:01:00.662 ******** 2026-03-19 00:44:16.750648 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:16.750652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:16.750657 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:16.750662 | orchestrator | 2026-03-19 00:44:16.750666 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-19 00:44:16.750671 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.144) 0:01:00.806 ******** 2026-03-19 00:44:16.750675 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:16.750680 | orchestrator | 2026-03-19 00:44:16.750685 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-19 00:44:16.750689 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.135) 0:01:00.942 ******** 2026-03-19 00:44:16.750698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:22.779554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:22.779639 | 
orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779645 | orchestrator | 2026-03-19 00:44:22.779650 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-19 00:44:22.779656 | orchestrator | Thursday 19 March 2026 00:44:16 +0000 (0:00:00.283) 0:01:01.225 ******** 2026-03-19 00:44:22.779660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:22.779665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:22.779669 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779673 | orchestrator | 2026-03-19 00:44:22.779692 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-19 00:44:22.779696 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.154) 0:01:01.380 ******** 2026-03-19 00:44:22.779700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:22.779704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:22.779707 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779711 | orchestrator | 2026-03-19 00:44:22.779715 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-19 00:44:22.779719 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.144) 0:01:01.524 ******** 2026-03-19 00:44:22.779722 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779726 | orchestrator | 2026-03-19 00:44:22.779730 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-19 00:44:22.779734 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.099) 0:01:01.624 ******** 2026-03-19 00:44:22.779737 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779741 | orchestrator | 2026-03-19 00:44:22.779745 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-19 00:44:22.779749 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.119) 0:01:01.744 ******** 2026-03-19 00:44:22.779752 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779757 | orchestrator | 2026-03-19 00:44:22.779761 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-19 00:44:22.779764 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.138) 0:01:01.882 ******** 2026-03-19 00:44:22.779768 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 00:44:22.779773 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-19 00:44:22.779777 | orchestrator | } 2026-03-19 00:44:22.779780 | orchestrator | 2026-03-19 00:44:22.779784 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-19 00:44:22.779788 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.119) 0:01:02.002 ******** 2026-03-19 00:44:22.779792 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 00:44:22.779796 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-19 00:44:22.779799 | orchestrator | } 2026-03-19 00:44:22.779803 | orchestrator | 2026-03-19 00:44:22.779807 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-19 00:44:22.779811 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.118) 0:01:02.120 ******** 2026-03-19 00:44:22.779814 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 00:44:22.779818 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-03-19 00:44:22.779822 | orchestrator | } 2026-03-19 00:44:22.779826 | orchestrator | 2026-03-19 00:44:22.779829 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-19 00:44:22.779833 | orchestrator | Thursday 19 March 2026 00:44:17 +0000 (0:00:00.106) 0:01:02.227 ******** 2026-03-19 00:44:22.779852 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:22.779856 | orchestrator | 2026-03-19 00:44:22.779860 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-19 00:44:22.779864 | orchestrator | Thursday 19 March 2026 00:44:18 +0000 (0:00:00.476) 0:01:02.704 ******** 2026-03-19 00:44:22.779868 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:22.779871 | orchestrator | 2026-03-19 00:44:22.779875 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-19 00:44:22.779879 | orchestrator | Thursday 19 March 2026 00:44:18 +0000 (0:00:00.544) 0:01:03.249 ******** 2026-03-19 00:44:22.779883 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:22.779886 | orchestrator | 2026-03-19 00:44:22.779890 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-19 00:44:22.779894 | orchestrator | Thursday 19 March 2026 00:44:19 +0000 (0:00:00.532) 0:01:03.781 ******** 2026-03-19 00:44:22.779898 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:22.779901 | orchestrator | 2026-03-19 00:44:22.779905 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-19 00:44:22.779909 | orchestrator | Thursday 19 March 2026 00:44:19 +0000 (0:00:00.342) 0:01:04.123 ******** 2026-03-19 00:44:22.779912 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779916 | orchestrator | 2026-03-19 00:44:22.779920 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-03-19 00:44:22.779924 | orchestrator | Thursday 19 March 2026 00:44:19 +0000 (0:00:00.110) 0:01:04.234 ******** 2026-03-19 00:44:22.779927 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779931 | orchestrator | 2026-03-19 00:44:22.779935 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-19 00:44:22.779938 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.110) 0:01:04.344 ******** 2026-03-19 00:44:22.779942 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 00:44:22.779946 | orchestrator |  "vgs_report": { 2026-03-19 00:44:22.779950 | orchestrator |  "vg": [] 2026-03-19 00:44:22.779965 | orchestrator |  } 2026-03-19 00:44:22.779969 | orchestrator | } 2026-03-19 00:44:22.779973 | orchestrator | 2026-03-19 00:44:22.779977 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-19 00:44:22.779981 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.146) 0:01:04.491 ******** 2026-03-19 00:44:22.779985 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.779988 | orchestrator | 2026-03-19 00:44:22.779992 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-19 00:44:22.779996 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.150) 0:01:04.641 ******** 2026-03-19 00:44:22.780000 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780003 | orchestrator | 2026-03-19 00:44:22.780007 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-19 00:44:22.780011 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.133) 0:01:04.774 ******** 2026-03-19 00:44:22.780015 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780018 | orchestrator | 2026-03-19 00:44:22.780022 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-03-19 00:44:22.780029 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.136) 0:01:04.910 ******** 2026-03-19 00:44:22.780033 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780037 | orchestrator | 2026-03-19 00:44:22.780040 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-19 00:44:22.780044 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.169) 0:01:05.080 ******** 2026-03-19 00:44:22.780048 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780052 | orchestrator | 2026-03-19 00:44:22.780055 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-19 00:44:22.780059 | orchestrator | Thursday 19 March 2026 00:44:20 +0000 (0:00:00.133) 0:01:05.214 ******** 2026-03-19 00:44:22.780063 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780070 | orchestrator | 2026-03-19 00:44:22.780074 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-19 00:44:22.780077 | orchestrator | Thursday 19 March 2026 00:44:21 +0000 (0:00:00.131) 0:01:05.345 ******** 2026-03-19 00:44:22.780081 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780085 | orchestrator | 2026-03-19 00:44:22.780089 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-19 00:44:22.780092 | orchestrator | Thursday 19 March 2026 00:44:21 +0000 (0:00:00.152) 0:01:05.498 ******** 2026-03-19 00:44:22.780096 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780100 | orchestrator | 2026-03-19 00:44:22.780104 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-19 00:44:22.780108 | orchestrator | Thursday 19 March 2026 00:44:21 +0000 (0:00:00.134) 0:01:05.632 ******** 2026-03-19 00:44:22.780113 | orchestrator | skipping: 
[testbed-node-5] 2026-03-19 00:44:22.780117 | orchestrator | 2026-03-19 00:44:22.780121 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-19 00:44:22.780126 | orchestrator | Thursday 19 March 2026 00:44:21 +0000 (0:00:00.329) 0:01:05.962 ******** 2026-03-19 00:44:22.780130 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780134 | orchestrator | 2026-03-19 00:44:22.780138 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-19 00:44:22.780143 | orchestrator | Thursday 19 March 2026 00:44:21 +0000 (0:00:00.143) 0:01:06.106 ******** 2026-03-19 00:44:22.780147 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780151 | orchestrator | 2026-03-19 00:44:22.780155 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-19 00:44:22.780160 | orchestrator | Thursday 19 March 2026 00:44:21 +0000 (0:00:00.135) 0:01:06.242 ******** 2026-03-19 00:44:22.780164 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780169 | orchestrator | 2026-03-19 00:44:22.780173 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-19 00:44:22.780177 | orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.178) 0:01:06.420 ******** 2026-03-19 00:44:22.780181 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780186 | orchestrator | 2026-03-19 00:44:22.780190 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-19 00:44:22.780195 | orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.130) 0:01:06.551 ******** 2026-03-19 00:44:22.780199 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780203 | orchestrator | 2026-03-19 00:44:22.780207 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-19 00:44:22.780211 | 
orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.155) 0:01:06.707 ******** 2026-03-19 00:44:22.780216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:22.780220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:22.780224 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780228 | orchestrator | 2026-03-19 00:44:22.780232 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-19 00:44:22.780235 | orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.139) 0:01:06.846 ******** 2026-03-19 00:44:22.780239 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:22.780243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:22.780247 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:22.780251 | orchestrator | 2026-03-19 00:44:22.780254 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-19 00:44:22.780262 | orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.127) 0:01:06.974 ******** 2026-03-19 00:44:22.780269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 
00:44:25.616297 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616311 | orchestrator | 2026-03-19 00:44:25.616319 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-19 00:44:25.616328 | orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.146) 0:01:07.120 ******** 2026-03-19 00:44:25.616335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616370 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616378 | orchestrator | 2026-03-19 00:44:25.616384 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-19 00:44:25.616455 | orchestrator | Thursday 19 March 2026 00:44:22 +0000 (0:00:00.125) 0:01:07.246 ******** 2026-03-19 00:44:25.616463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616478 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616485 | orchestrator | 2026-03-19 00:44:25.616492 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-19 00:44:25.616499 | orchestrator | Thursday 19 March 2026 00:44:23 +0000 (0:00:00.140) 0:01:07.387 ******** 2026-03-19 00:44:25.616505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 
'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616512 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616519 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616526 | orchestrator | 2026-03-19 00:44:25.616532 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-19 00:44:25.616538 | orchestrator | Thursday 19 March 2026 00:44:23 +0000 (0:00:00.129) 0:01:07.516 ******** 2026-03-19 00:44:25.616545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616558 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616565 | orchestrator | 2026-03-19 00:44:25.616572 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-19 00:44:25.616579 | orchestrator | Thursday 19 March 2026 00:44:23 +0000 (0:00:00.274) 0:01:07.791 ******** 2026-03-19 00:44:25.616586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616600 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616633 | orchestrator | 2026-03-19 00:44:25.616639 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-19 
00:44:25.616646 | orchestrator | Thursday 19 March 2026 00:44:23 +0000 (0:00:00.167) 0:01:07.958 ******** 2026-03-19 00:44:25.616653 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:25.616661 | orchestrator | 2026-03-19 00:44:25.616668 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-19 00:44:25.616674 | orchestrator | Thursday 19 March 2026 00:44:24 +0000 (0:00:00.555) 0:01:08.513 ******** 2026-03-19 00:44:25.616681 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:25.616687 | orchestrator | 2026-03-19 00:44:25.616693 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-19 00:44:25.616697 | orchestrator | Thursday 19 March 2026 00:44:24 +0000 (0:00:00.489) 0:01:09.003 ******** 2026-03-19 00:44:25.616700 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:25.616704 | orchestrator | 2026-03-19 00:44:25.616708 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-19 00:44:25.616712 | orchestrator | Thursday 19 March 2026 00:44:24 +0000 (0:00:00.120) 0:01:09.123 ******** 2026-03-19 00:44:25.616716 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'vg_name': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'}) 2026-03-19 00:44:25.616722 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'vg_name': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'}) 2026-03-19 00:44:25.616725 | orchestrator | 2026-03-19 00:44:25.616729 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-19 00:44:25.616733 | orchestrator | Thursday 19 March 2026 00:44:25 +0000 (0:00:00.165) 0:01:09.289 ******** 2026-03-19 00:44:25.616753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 
'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616766 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616772 | orchestrator | 2026-03-19 00:44:25.616779 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-19 00:44:25.616785 | orchestrator | Thursday 19 March 2026 00:44:25 +0000 (0:00:00.142) 0:01:09.431 ******** 2026-03-19 00:44:25.616792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616805 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616812 | orchestrator | 2026-03-19 00:44:25.616818 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-19 00:44:25.616825 | orchestrator | Thursday 19 March 2026 00:44:25 +0000 (0:00:00.150) 0:01:09.581 ******** 2026-03-19 00:44:25.616832 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'})  2026-03-19 00:44:25.616838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'})  2026-03-19 00:44:25.616844 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:25.616851 | orchestrator | 2026-03-19 00:44:25.616857 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-19 
00:44:25.616863 | orchestrator | Thursday 19 March 2026 00:44:25 +0000 (0:00:00.146) 0:01:09.727 ******** 2026-03-19 00:44:25.616870 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 00:44:25.616876 | orchestrator |  "lvm_report": { 2026-03-19 00:44:25.616883 | orchestrator |  "lv": [ 2026-03-19 00:44:25.616896 | orchestrator |  { 2026-03-19 00:44:25.616902 | orchestrator |  "lv_name": "osd-block-056512d9-3a02-5302-afc2-fa0158449af3", 2026-03-19 00:44:25.616907 | orchestrator |  "vg_name": "ceph-056512d9-3a02-5302-afc2-fa0158449af3" 2026-03-19 00:44:25.616912 | orchestrator |  }, 2026-03-19 00:44:25.616916 | orchestrator |  { 2026-03-19 00:44:25.616921 | orchestrator |  "lv_name": "osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb", 2026-03-19 00:44:25.616924 | orchestrator |  "vg_name": "ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb" 2026-03-19 00:44:25.616928 | orchestrator |  } 2026-03-19 00:44:25.616932 | orchestrator |  ], 2026-03-19 00:44:25.616935 | orchestrator |  "pv": [ 2026-03-19 00:44:25.616939 | orchestrator |  { 2026-03-19 00:44:25.616943 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-19 00:44:25.616947 | orchestrator |  "vg_name": "ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb" 2026-03-19 00:44:25.616950 | orchestrator |  }, 2026-03-19 00:44:25.616954 | orchestrator |  { 2026-03-19 00:44:25.616958 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-19 00:44:25.616961 | orchestrator |  "vg_name": "ceph-056512d9-3a02-5302-afc2-fa0158449af3" 2026-03-19 00:44:25.616965 | orchestrator |  } 2026-03-19 00:44:25.616969 | orchestrator |  ] 2026-03-19 00:44:25.616972 | orchestrator |  } 2026-03-19 00:44:25.616976 | orchestrator | } 2026-03-19 00:44:25.616980 | orchestrator | 2026-03-19 00:44:25.616984 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:44:25.616988 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-19 00:44:25.616991 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-19 00:44:25.616996 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-19 00:44:25.616999 | orchestrator | 2026-03-19 00:44:25.617003 | orchestrator | 2026-03-19 00:44:25.617007 | orchestrator | 2026-03-19 00:44:25.617018 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:44:25.617022 | orchestrator | Thursday 19 March 2026 00:44:25 +0000 (0:00:00.135) 0:01:09.862 ******** 2026-03-19 00:44:25.617025 | orchestrator | =============================================================================== 2026-03-19 00:44:25.617029 | orchestrator | Create block VGs -------------------------------------------------------- 5.57s 2026-03-19 00:44:25.617033 | orchestrator | Create block LVs -------------------------------------------------------- 3.96s 2026-03-19 00:44:25.617036 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-03-19 00:44:25.617040 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-03-19 00:44:25.617044 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s 2026-03-19 00:44:25.617047 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-03-19 00:44:25.617051 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-03-19 00:44:25.617055 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s 2026-03-19 00:44:25.617062 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s 2026-03-19 00:44:25.964076 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-03-19 
00:44:25.964185 | orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-03-19 00:44:25.964205 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-03-19 00:44:25.965033 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-03-19 00:44:25.965065 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-03-19 00:44:25.965100 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-19 00:44:25.965108 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-03-19 00:44:25.965131 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-03-19 00:44:25.965140 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-03-19 00:44:25.965147 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2026-03-19 00:44:25.965154 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-03-19 00:44:37.504021 | orchestrator | 2026-03-19 00:44:37 | INFO  | Prepare task for execution of facts. 2026-03-19 00:44:37.563739 | orchestrator | 2026-03-19 00:44:37 | INFO  | Task 7196a9bb-953c-443b-a048-d089d278abbd (facts) was prepared for execution. 2026-03-19 00:44:37.563841 | orchestrator | 2026-03-19 00:44:37 | INFO  | It takes a moment until task 7196a9bb-953c-443b-a048-d089d278abbd (facts) has been started and output is visible here. 
2026-03-19 00:44:49.506835 | orchestrator | 2026-03-19 00:44:49.506968 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-19 00:44:49.506985 | orchestrator | 2026-03-19 00:44:49.506998 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-19 00:44:49.507010 | orchestrator | Thursday 19 March 2026 00:44:40 +0000 (0:00:00.299) 0:00:00.299 ******** 2026-03-19 00:44:49.507022 | orchestrator | ok: [testbed-manager] 2026-03-19 00:44:49.507033 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:44:49.507044 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:44:49.507055 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:44:49.507066 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:44:49.507077 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:44:49.507087 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:49.507098 | orchestrator | 2026-03-19 00:44:49.507109 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-19 00:44:49.507120 | orchestrator | Thursday 19 March 2026 00:44:41 +0000 (0:00:01.215) 0:00:01.515 ******** 2026-03-19 00:44:49.507131 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:44:49.507143 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:44:49.507154 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:44:49.507164 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:44:49.507175 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:44:49.507186 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:49.507197 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:49.507207 | orchestrator | 2026-03-19 00:44:49.507218 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 00:44:49.507229 | orchestrator | 2026-03-19 00:44:49.507240 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-19 00:44:49.507251 | orchestrator | Thursday 19 March 2026 00:44:43 +0000 (0:00:01.101) 0:00:02.616 ******** 2026-03-19 00:44:49.507262 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:44:49.507272 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:44:49.507283 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:44:49.507294 | orchestrator | ok: [testbed-manager] 2026-03-19 00:44:49.507305 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:44:49.507316 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:44:49.507326 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:44:49.507337 | orchestrator | 2026-03-19 00:44:49.507348 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 00:44:49.507360 | orchestrator | 2026-03-19 00:44:49.507373 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 00:44:49.507385 | orchestrator | Thursday 19 March 2026 00:44:48 +0000 (0:00:05.587) 0:00:08.204 ******** 2026-03-19 00:44:49.507440 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:44:49.507453 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:44:49.507497 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:44:49.507510 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:44:49.507522 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:44:49.507533 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:44:49.507545 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:44:49.507557 | orchestrator | 2026-03-19 00:44:49.507569 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:44:49.507581 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:44:49.507595 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-19 00:44:49.507607 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:44:49.507619 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:44:49.507632 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:44:49.507644 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:44:49.507656 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:44:49.507667 | orchestrator | 2026-03-19 00:44:49.507679 | orchestrator | 2026-03-19 00:44:49.507692 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:44:49.507704 | orchestrator | Thursday 19 March 2026 00:44:49 +0000 (0:00:00.504) 0:00:08.709 ******** 2026-03-19 00:44:49.507716 | orchestrator | =============================================================================== 2026-03-19 00:44:49.507728 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.59s 2026-03-19 00:44:49.507740 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s 2026-03-19 00:44:49.507767 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2026-03-19 00:44:49.507778 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-19 00:45:00.926001 | orchestrator | 2026-03-19 00:45:00 | INFO  | Prepare task for execution of frr. 2026-03-19 00:45:01.000876 | orchestrator | 2026-03-19 00:45:00 | INFO  | Task 42dc6bfe-5915-46ea-975a-7b6f4395c61c (frr) was prepared for execution. 
2026-03-19 00:45:01.000979 | orchestrator | 2026-03-19 00:45:00 | INFO  | It takes a moment until task 42dc6bfe-5915-46ea-975a-7b6f4395c61c (frr) has been started and output is visible here. 2026-03-19 00:45:25.148522 | orchestrator | 2026-03-19 00:45:25.148632 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-19 00:45:25.148646 | orchestrator | 2026-03-19 00:45:25.148657 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-19 00:45:25.148666 | orchestrator | Thursday 19 March 2026 00:45:04 +0000 (0:00:00.299) 0:00:00.299 ******** 2026-03-19 00:45:25.148676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 00:45:25.148686 | orchestrator | 2026-03-19 00:45:25.148695 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-19 00:45:25.148704 | orchestrator | Thursday 19 March 2026 00:45:04 +0000 (0:00:00.217) 0:00:00.516 ******** 2026-03-19 00:45:25.148713 | orchestrator | changed: [testbed-manager] 2026-03-19 00:45:25.148722 | orchestrator | 2026-03-19 00:45:25.148731 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-19 00:45:25.148764 | orchestrator | Thursday 19 March 2026 00:45:05 +0000 (0:00:01.514) 0:00:02.031 ******** 2026-03-19 00:45:25.148774 | orchestrator | changed: [testbed-manager] 2026-03-19 00:45:25.148782 | orchestrator | 2026-03-19 00:45:25.148791 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-19 00:45:25.148800 | orchestrator | Thursday 19 March 2026 00:45:15 +0000 (0:00:09.629) 0:00:11.661 ******** 2026-03-19 00:45:25.148809 | orchestrator | ok: [testbed-manager] 2026-03-19 00:45:25.148818 | orchestrator | 2026-03-19 00:45:25.148828 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-19 00:45:25.148837 | orchestrator | Thursday 19 March 2026 00:45:16 +0000 (0:00:01.032) 0:00:12.693 ******** 2026-03-19 00:45:25.148846 | orchestrator | changed: [testbed-manager] 2026-03-19 00:45:25.148854 | orchestrator | 2026-03-19 00:45:25.148863 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-19 00:45:25.148872 | orchestrator | Thursday 19 March 2026 00:45:17 +0000 (0:00:00.951) 0:00:13.645 ******** 2026-03-19 00:45:25.148880 | orchestrator | ok: [testbed-manager] 2026-03-19 00:45:25.148889 | orchestrator | 2026-03-19 00:45:25.148898 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-19 00:45:25.148906 | orchestrator | Thursday 19 March 2026 00:45:18 +0000 (0:00:01.231) 0:00:14.877 ******** 2026-03-19 00:45:25.148915 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:45:25.148924 | orchestrator | 2026-03-19 00:45:25.148932 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-19 00:45:25.148941 | orchestrator | Thursday 19 March 2026 00:45:18 +0000 (0:00:00.150) 0:00:15.027 ******** 2026-03-19 00:45:25.148950 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:45:25.148958 | orchestrator | 2026-03-19 00:45:25.148967 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-19 00:45:25.148976 | orchestrator | Thursday 19 March 2026 00:45:19 +0000 (0:00:00.260) 0:00:15.288 ******** 2026-03-19 00:45:25.148984 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:45:25.148993 | orchestrator | 2026-03-19 00:45:25.149001 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-19 00:45:25.149012 | orchestrator | Thursday 19 March 2026 00:45:19 +0000 (0:00:00.154) 0:00:15.442 ******** 2026-03-19 
00:45:25.149022 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:45:25.149031 | orchestrator | 2026-03-19 00:45:25.149042 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-19 00:45:25.149052 | orchestrator | Thursday 19 March 2026 00:45:19 +0000 (0:00:00.131) 0:00:15.574 ******** 2026-03-19 00:45:25.149062 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:45:25.149072 | orchestrator | 2026-03-19 00:45:25.149081 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-19 00:45:25.149091 | orchestrator | Thursday 19 March 2026 00:45:19 +0000 (0:00:00.155) 0:00:15.729 ******** 2026-03-19 00:45:25.149100 | orchestrator | changed: [testbed-manager] 2026-03-19 00:45:25.149110 | orchestrator | 2026-03-19 00:45:25.149120 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-19 00:45:25.149131 | orchestrator | Thursday 19 March 2026 00:45:20 +0000 (0:00:00.916) 0:00:16.646 ******** 2026-03-19 00:45:25.149142 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-19 00:45:25.149157 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-19 00:45:25.149173 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-19 00:45:25.149187 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-19 00:45:25.149200 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-19 00:45:25.149215 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-19 00:45:25.149242 | orchestrator | 2026-03-19 00:45:25.149258 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-19 00:45:25.149292 | orchestrator | Thursday 19 March 2026 00:45:22 +0000 (0:00:01.984) 0:00:18.631 ******** 2026-03-19 00:45:25.149308 | orchestrator | ok: [testbed-manager] 2026-03-19 00:45:25.149321 | orchestrator | 2026-03-19 00:45:25.149330 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-19 00:45:25.149339 | orchestrator | Thursday 19 March 2026 00:45:23 +0000 (0:00:01.102) 0:00:19.733 ******** 2026-03-19 00:45:25.149347 | orchestrator | changed: [testbed-manager] 2026-03-19 00:45:25.149356 | orchestrator | 2026-03-19 00:45:25.149365 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:45:25.149374 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 00:45:25.149405 | orchestrator | 2026-03-19 00:45:25.149415 | orchestrator | 2026-03-19 00:45:25.149440 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:45:25.149449 | orchestrator | Thursday 19 March 2026 00:45:24 +0000 (0:00:01.277) 0:00:21.011 ******** 2026-03-19 00:45:25.149458 | orchestrator | =============================================================================== 2026-03-19 00:45:25.149467 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.63s 2026-03-19 00:45:25.149475 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.98s 2026-03-19 00:45:25.149484 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.51s 2026-03-19 00:45:25.149493 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.28s 2026-03-19 00:45:25.149502 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.23s 
2026-03-19 00:45:25.149510 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.10s 2026-03-19 00:45:25.149519 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-03-19 00:45:25.149527 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s 2026-03-19 00:45:25.149536 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.92s 2026-03-19 00:45:25.149548 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.26s 2026-03-19 00:45:25.149564 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-03-19 00:45:25.149579 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-03-19 00:45:25.149595 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-03-19 00:45:25.149611 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-19 00:45:25.149626 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-03-19 00:45:25.269355 | orchestrator | 2026-03-19 00:45:25.272463 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Mar 19 00:45:25 UTC 2026 2026-03-19 00:45:25.272517 | orchestrator | 2026-03-19 00:45:26.266859 | orchestrator | 2026-03-19 00:45:26 | INFO  | Collection nutshell is prepared for execution 2026-03-19 00:45:26.365954 | orchestrator | 2026-03-19 00:45:26 | INFO  | A [0] - dotfiles 2026-03-19 00:45:36.503015 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [0] - homer 2026-03-19 00:45:36.503122 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [0] - netdata 2026-03-19 00:45:36.503136 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [0] - openstackclient 2026-03-19 00:45:36.503147 | orchestrator | 2026-03-19 
00:45:36 | INFO  | A [0] - phpmyadmin
2026-03-19 00:45:36.503157 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [0] - common
2026-03-19 00:45:36.506833 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- loadbalancer
2026-03-19 00:45:36.506904 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [2] --- opensearch
2026-03-19 00:45:36.506952 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [2] --- mariadb-ng
2026-03-19 00:45:36.507557 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [3] ---- horizon
2026-03-19 00:45:36.507598 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [3] ---- keystone
2026-03-19 00:45:36.507626 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- neutron
2026-03-19 00:45:36.507834 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [5] ------ wait-for-nova
2026-03-19 00:45:36.508268 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [6] ------- octavia
2026-03-19 00:45:36.509684 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- barbican
2026-03-19 00:45:36.509749 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- designate
2026-03-19 00:45:36.509764 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- ironic
2026-03-19 00:45:36.509770 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- placement
2026-03-19 00:45:36.510304 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- magnum
2026-03-19 00:45:36.511728 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- openvswitch
2026-03-19 00:45:36.511848 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [2] --- ovn
2026-03-19 00:45:36.512122 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- memcached
2026-03-19 00:45:36.512357 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- redis
2026-03-19 00:45:36.512371 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- rabbitmq-ng
2026-03-19 00:45:36.512770 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [0] - kubernetes
2026-03-19 00:45:36.515102 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- kubeconfig
2026-03-19 00:45:36.515219 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- copy-kubeconfig
2026-03-19 00:45:36.515410 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [0] - ceph
2026-03-19 00:45:36.517640 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [1] -- ceph-pools
2026-03-19 00:45:36.517745 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [2] --- copy-ceph-keys
2026-03-19 00:45:36.517755 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [3] ---- cephclient
2026-03-19 00:45:36.517767 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-19 00:45:36.517926 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- wait-for-keystone
2026-03-19 00:45:36.518080 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-19 00:45:36.518278 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [5] ------ glance
2026-03-19 00:45:36.518676 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [5] ------ cinder
2026-03-19 00:45:36.518701 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [5] ------ nova
2026-03-19 00:45:36.518941 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [4] ----- prometheus
2026-03-19 00:45:36.518952 | orchestrator | 2026-03-19 00:45:36 | INFO  | A [5] ------ grafana
2026-03-19 00:45:36.690850 | orchestrator | 2026-03-19 00:45:36 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-19 00:45:36.690953 | orchestrator | 2026-03-19 00:45:36 | INFO  | Tasks are running in the background
2026-03-19 00:45:38.403519 | orchestrator | 2026-03-19 00:45:38 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-19 00:45:40.596768 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:45:40.600204 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:45:40.600826 | orchestrator | 2026-03-19 00:45:40 | INFO
 | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:45:40.601760 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:45:40.602600 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED 2026-03-19 00:45:40.603516 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task 62f77d09-7745-4ee2-a6c0-809f0e224058 is in state STARTED 2026-03-19 00:45:40.604042 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED 2026-03-19 00:45:40.604663 | orchestrator | 2026-03-19 00:45:40 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:45:40.604698 | orchestrator | 2026-03-19 00:45:40 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:45:43.640285 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:45:43.644624 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:45:43.645357 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:45:43.645928 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:45:43.646764 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED 2026-03-19 00:45:43.647074 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task 62f77d09-7745-4ee2-a6c0-809f0e224058 is in state SUCCESS 2026-03-19 00:45:43.647825 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED 2026-03-19 00:45:43.652733 | orchestrator | 2026-03-19 00:45:43 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:45:43.652806 | orchestrator | 2026-03-19 
00:45:43 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:45:46.691476 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:45:46.691587 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:45:46.691912 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:45:46.692740 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:45:46.696358 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED 2026-03-19 00:45:46.696968 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED 2026-03-19 00:45:46.697527 | orchestrator | 2026-03-19 00:45:46 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:45:46.697550 | orchestrator | 2026-03-19 00:45:46 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:45:49.789973 | orchestrator | 2026-03-19 00:45:49 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:45:49.790141 | orchestrator | 2026-03-19 00:45:49 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:45:49.790159 | orchestrator | 2026-03-19 00:45:49 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:45:49.790170 | orchestrator | 2026-03-19 00:45:49 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:45:49.790211 | orchestrator | 2026-03-19 00:45:49 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED 2026-03-19 00:45:49.790222 | orchestrator | 2026-03-19 00:45:49 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED 2026-03-19 00:45:49.790231 | orchestrator | 2026-03-19 
00:45:49 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:45:49.790241 | orchestrator | 2026-03-19 00:45:49 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:45:52.922155 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:45:52.922299 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:45:52.922314 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:45:52.922323 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:45:52.922329 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED 2026-03-19 00:45:52.922336 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED 2026-03-19 00:45:52.922343 | orchestrator | 2026-03-19 00:45:52 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:45:52.922351 | orchestrator | 2026-03-19 00:45:52 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:45:55.976959 | orchestrator | 2026-03-19 00:45:55 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:45:55.977983 | orchestrator | 2026-03-19 00:45:55 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:45:55.978632 | orchestrator | 2026-03-19 00:45:55 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:45:55.979583 | orchestrator | 2026-03-19 00:45:55 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:45:55.980332 | orchestrator | 2026-03-19 00:45:55 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED 2026-03-19 00:45:55.982862 | orchestrator | 2026-03-19 
00:45:55 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:45:55.984975 | orchestrator | 2026-03-19 00:45:55 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:45:55.985014 | orchestrator | 2026-03-19 00:45:55 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:45:59.099124 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:45:59.101437 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:45:59.104253 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:45:59.104797 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:45:59.106443 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state STARTED
2026-03-19 00:45:59.106939 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:45:59.107946 | orchestrator | 2026-03-19 00:45:59 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:45:59.108688 | orchestrator | 2026-03-19 00:45:59 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:02.308861 | orchestrator |
2026-03-19 00:46:02.308973 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 00:46:02.308989 | orchestrator |
2026-03-19 00:46:02.309001 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 00:46:02.309012 | orchestrator | Thursday 19 March 2026 00:43:28 +0000 (0:00:00.269) 0:00:00.269 ********
2026-03-19 00:46:02.309022 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:46:02.309033 | orchestrator |
2026-03-19 00:46:02.309043 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 00:46:02.309054 | orchestrator | Thursday 19 March 2026 00:43:28 +0000 (0:00:00.095) 0:00:00.364 ********
2026-03-19 00:46:02.309064 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-19 00:46:02.309075 | orchestrator |
2026-03-19 00:46:02.309085 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-19 00:46:02.309094 | orchestrator |
2026-03-19 00:46:02.309105 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 00:46:02.309115 | orchestrator | Thursday 19 March 2026 00:43:28 +0000 (0:00:00.120) 0:00:00.485 ********
2026-03-19 00:46:02.309125 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0
2026-03-19 00:46:02.309135 | orchestrator |
2026-03-19 00:46:02.309145 | orchestrator | TASK [service-images-pull : opensearch | Pull images] **************************
2026-03-19 00:46:02.309155 | orchestrator | Thursday 19 March 2026 00:43:29 +0000 (0:00:00.156) 0:00:00.641 ********
2026-03-19 00:46:02.309165 | orchestrator | changed: [testbed-node-0] => (item=opensearch)
2026-03-19 00:46:02.309176 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards)
2026-03-19 00:46:02.309186 | orchestrator |
2026-03-19 00:46:02.309195 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:46:02.309206 | orchestrator | testbed-node-0 : ok=4  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.309218 | orchestrator |
2026-03-19 00:46:02.309228 | orchestrator |
2026-03-19 00:46:02.309239 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:46:02.309249 | orchestrator | Thursday 19 March 2026 00:45:41 +0000 (0:02:12.603) 0:02:13.244 ********
2026-03-19 00:46:02.309259 | orchestrator | ===============================================================================
2026-03-19 00:46:02.309269 | orchestrator | service-images-pull : opensearch | Pull images ------------------------ 132.60s
2026-03-19 00:46:02.309279 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.16s
2026-03-19 00:46:02.309289 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.12s
2026-03-19 00:46:02.309299 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.10s
2026-03-19 00:46:02.309309 | orchestrator |
2026-03-19 00:46:02.309319 | orchestrator |
2026-03-19 00:46:02.309329 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-19 00:46:02.309338 | orchestrator |
2026-03-19 00:46:02.309348 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-19 00:46:02.309358 | orchestrator | Thursday 19 March 2026 00:45:46 +0000 (0:00:01.078) 0:00:01.078 ********
2026-03-19 00:46:02.309368 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:46:02.309406 | orchestrator | changed: [testbed-manager]
2026-03-19 00:46:02.309417 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:46:02.309428 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:46:02.309438 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:46:02.309449 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:46:02.309460 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:46:02.309470 | orchestrator |
2026-03-19 00:46:02.309481 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
******** 2026-03-19 00:46:02.309493 | orchestrator | Thursday 19 March 2026 00:45:52 +0000 (0:00:05.559) 0:00:06.638 ******** 2026-03-19 00:46:02.309504 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-19 00:46:02.309550 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-19 00:46:02.309562 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-19 00:46:02.309572 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-19 00:46:02.309583 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-19 00:46:02.309594 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-19 00:46:02.309605 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-19 00:46:02.309615 | orchestrator | 2026-03-19 00:46:02.309625 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-03-19 00:46:02.309637 | orchestrator | Thursday 19 March 2026 00:45:53 +0000 (0:00:01.512) 0:00:08.151 ******** 2026-03-19 00:46:02.309653 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:53.322303', 'end': '2026-03-19 00:45:53.326626', 'delta': '0:00:00.004323', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309706 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:52.867785', 'end': '2026-03-19 00:45:52.875531', 'delta': '0:00:00.007746', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309719 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:52.789385', 'end': '2026-03-19 00:45:52.798414', 'delta': '0:00:00.009029', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309731 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:52.880838', 'end': '2026-03-19 00:45:52.888093', 'delta': '0:00:00.007255', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309751 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:52.952800', 'end': '2026-03-19 00:45:52.963076', 'delta': '0:00:00.010276', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309762 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:53.119695', 'end': '2026-03-19 00:45:53.128146', 'delta': '0:00:00.008451', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309786 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-19 00:45:53.410633', 'end': '2026-03-19 00:45:53.418910', 'delta': '0:00:00.008277', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-19 00:46:02.309797 | orchestrator | 2026-03-19 00:46:02.309807 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
****
2026-03-19 00:46:02.309818 | orchestrator | Thursday 19 March 2026 00:45:56 +0000 (0:00:02.385) 0:00:10.537 ********
2026-03-19 00:46:02.309827 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-19 00:46:02.309837 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-19 00:46:02.309847 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-19 00:46:02.309857 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-19 00:46:02.309866 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-19 00:46:02.309876 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-19 00:46:02.309886 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-19 00:46:02.309895 | orchestrator |
2026-03-19 00:46:02.309906 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-19 00:46:02.309916 | orchestrator | Thursday 19 March 2026 00:45:59 +0000 (0:00:02.943) 0:00:13.480 ********
2026-03-19 00:46:02.309926 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-19 00:46:02.309935 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-19 00:46:02.309945 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-19 00:46:02.309954 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-19 00:46:02.309964 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-19 00:46:02.309981 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-19 00:46:02.309991 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-19 00:46:02.310001 | orchestrator |
2026-03-19 00:46:02.310010 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:46:02.310091 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310104 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310116 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310126 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310136 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310147 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310156 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:46:02.310166 | orchestrator |
2026-03-19 00:46:02.310176 | orchestrator |
2026-03-19 00:46:02.310185 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:46:02.310195 | orchestrator | Thursday 19 March 2026 00:46:01 +0000 (0:00:02.345) 0:00:15.826 ********
2026-03-19 00:46:02.310205 | orchestrator | ===============================================================================
2026-03-19 00:46:02.310214 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.56s
2026-03-19 00:46:02.310225 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.94s
2026-03-19 00:46:02.310234 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.39s
2026-03-19 00:46:02.310245 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.35s
2026-03-19 00:46:02.310255 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.51s
2026-03-19 00:46:02.310265 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:02.310276 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:02.310287 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:02.310315 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:02.310326 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task 8d5e9b4a-b322-4e50-a07c-c105c1a6728f is in state SUCCESS
2026-03-19 00:46:02.310337 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:46:02.310348 | orchestrator | 2026-03-19 00:46:02 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:02.310359 | orchestrator | 2026-03-19 00:46:02 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:05.404418 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:05.404493 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:05.404501 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:05.404537 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:05.404544 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:05.407488 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:46:05.408062 | orchestrator | 2026-03-19 00:46:05 | INFO  | Task
1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:46:05.408098 | orchestrator | 2026-03-19 00:46:05 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:46:08.453368 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:46:08.453509 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:46:08.453517 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:46:08.457046 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:46:08.457145 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED 2026-03-19 00:46:08.457150 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED 2026-03-19 00:46:08.457155 | orchestrator | 2026-03-19 00:46:08 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED 2026-03-19 00:46:08.457160 | orchestrator | 2026-03-19 00:46:08 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:46:11.491694 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED 2026-03-19 00:46:11.491784 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:46:11.493189 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:46:11.493586 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED 2026-03-19 00:46:11.494089 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED 2026-03-19 00:46:11.494502 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task 
2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:46:11.496186 | orchestrator | 2026-03-19 00:46:11 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:11.496222 | orchestrator | 2026-03-19 00:46:11 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:14.656402 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:14.656513 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:14.656522 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:14.656529 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:14.656535 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:14.656542 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:46:14.657412 | orchestrator | 2026-03-19 00:46:14 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:14.657482 | orchestrator | 2026-03-19 00:46:14 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:17.712601 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:17.714242 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:17.714775 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:17.715814 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:17.716359 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:17.718325 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:46:17.719915 | orchestrator | 2026-03-19 00:46:17 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:17.720098 | orchestrator | 2026-03-19 00:46:17 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:20.767083 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:20.767142 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:20.767148 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:20.782090 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:20.786582 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:20.792820 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state STARTED
2026-03-19 00:46:20.795541 | orchestrator | 2026-03-19 00:46:20 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:20.795609 | orchestrator | 2026-03-19 00:46:20 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:23.849900 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:23.849984 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:23.850000 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:23.850078 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:23.853706 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:23.853764 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task 2d3399fa-aae9-44f7-83f0-3734637d195d is in state SUCCESS
2026-03-19 00:46:23.853775 | orchestrator | 2026-03-19 00:46:23 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:23.854152 | orchestrator | 2026-03-19 00:46:23 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:26.961923 | orchestrator | 2026-03-19 00:46:26 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:26.962674 | orchestrator | 2026-03-19 00:46:26 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:26.964680 | orchestrator | 2026-03-19 00:46:26 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:26.965999 | orchestrator | 2026-03-19 00:46:26 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:26.967527 | orchestrator | 2026-03-19 00:46:26 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:26.968781 | orchestrator | 2026-03-19 00:46:26 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:26.969154 | orchestrator | 2026-03-19 00:46:26 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:30.223219 | orchestrator | 2026-03-19 00:46:30 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:30.223311 | orchestrator | 2026-03-19 00:46:30 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:30.223353 | orchestrator | 2026-03-19 00:46:30 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:30.223433 | orchestrator | 2026-03-19 00:46:30 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:30.223443 | orchestrator | 2026-03-19 00:46:30 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:30.223449 | orchestrator | 2026-03-19 00:46:30 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:30.223456 | orchestrator | 2026-03-19 00:46:30 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:33.074212 | orchestrator | 2026-03-19 00:46:33 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state STARTED
2026-03-19 00:46:33.074301 | orchestrator | 2026-03-19 00:46:33 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:33.084151 | orchestrator | 2026-03-19 00:46:33 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:33.084209 | orchestrator | 2026-03-19 00:46:33 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:33.096648 | orchestrator | 2026-03-19 00:46:33 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:33.100875 | orchestrator | 2026-03-19 00:46:33 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:33.100950 | orchestrator | 2026-03-19 00:46:33 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:36.141255 | orchestrator | 2026-03-19 00:46:36 | INFO  | Task f2d9d006-e7f5-429f-9973-e30de3ef0839 is in state SUCCESS
2026-03-19 00:46:36.141333 | orchestrator | 2026-03-19 00:46:36 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:36.142398 | orchestrator | 2026-03-19 00:46:36 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:36.143867 | orchestrator | 2026-03-19 00:46:36 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:36.144051 | orchestrator | 2026-03-19 00:46:36 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:36.145556 | orchestrator | 2026-03-19 00:46:36 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:36.145590 | orchestrator | 2026-03-19 00:46:36 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:39.201769 | orchestrator | 2026-03-19 00:46:39 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:39.209346 | orchestrator | 2026-03-19 00:46:39 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:39.211516 | orchestrator | 2026-03-19 00:46:39 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:39.212387 | orchestrator | 2026-03-19 00:46:39 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:39.213556 | orchestrator | 2026-03-19 00:46:39 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:39.213587 | orchestrator | 2026-03-19 00:46:39 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:42.269065 | orchestrator | 2026-03-19 00:46:42 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:42.270656 | orchestrator | 2026-03-19 00:46:42 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:42.274212 | orchestrator | 2026-03-19 00:46:42 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:42.275740 | orchestrator | 2026-03-19 00:46:42 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:42.277778 | orchestrator | 2026-03-19 00:46:42 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:42.277817 | orchestrator | 2026-03-19 00:46:42 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:45.347912 | orchestrator | 2026-03-19 00:46:45 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:45.350054 | orchestrator | 2026-03-19 00:46:45 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:45.355660 | orchestrator | 2026-03-19 00:46:45 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:45.361609 | orchestrator | 2026-03-19 00:46:45 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:45.364416 | orchestrator | 2026-03-19 00:46:45 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:45.364465 | orchestrator | 2026-03-19 00:46:45 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:48.465743 | orchestrator | 2026-03-19 00:46:48 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:48.466787 | orchestrator | 2026-03-19 00:46:48 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:48.467836 | orchestrator | 2026-03-19 00:46:48 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:48.468954 | orchestrator | 2026-03-19 00:46:48 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:48.470466 | orchestrator | 2026-03-19 00:46:48 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:48.470503 | orchestrator | 2026-03-19 00:46:48 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:51.522222 | orchestrator | 2026-03-19 00:46:51 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:51.523584 | orchestrator | 2026-03-19 00:46:51 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:51.524473 | orchestrator | 2026-03-19 00:46:51 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:51.526195 | orchestrator | 2026-03-19 00:46:51 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:51.527661 | orchestrator | 2026-03-19 00:46:51 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:51.527713 | orchestrator | 2026-03-19 00:46:51 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:54.605471 | orchestrator | 2026-03-19 00:46:54 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:54.605616 | orchestrator | 2026-03-19 00:46:54 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:54.608554 | orchestrator | 2026-03-19 00:46:54 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:54.613192 | orchestrator | 2026-03-19 00:46:54 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:54.618201 | orchestrator | 2026-03-19 00:46:54 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:54.618253 | orchestrator | 2026-03-19 00:46:54 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:46:57.727212 | orchestrator | 2026-03-19 00:46:57 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:46:57.727864 | orchestrator | 2026-03-19 00:46:57 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:46:57.730106 | orchestrator | 2026-03-19 00:46:57 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:46:57.730154 | orchestrator | 2026-03-19 00:46:57 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:46:57.730159 | orchestrator | 2026-03-19 00:46:57 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:46:57.730211 | orchestrator | 2026-03-19 00:46:57 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:00.770633 | orchestrator | 2026-03-19 00:47:00 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:00.771525 | orchestrator | 2026-03-19 00:47:00 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:00.771985 | orchestrator | 2026-03-19 00:47:00 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:00.775216 | orchestrator | 2026-03-19 00:47:00 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:47:00.777752 | orchestrator | 2026-03-19 00:47:00 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:47:00.778984 | orchestrator | 2026-03-19 00:47:00 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:03.831115 | orchestrator | 2026-03-19 00:47:03 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:03.835405 | orchestrator | 2026-03-19 00:47:03 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:03.838617 | orchestrator | 2026-03-19 00:47:03 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:03.839387 | orchestrator | 2026-03-19 00:47:03 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:47:03.841273 | orchestrator | 2026-03-19 00:47:03 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:47:03.842433 | orchestrator | 2026-03-19 00:47:03 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:06.881900 | orchestrator | 2026-03-19 00:47:06 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:06.883023 | orchestrator | 2026-03-19 00:47:06 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:06.887407 | orchestrator | 2026-03-19 00:47:06 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:06.887512 | orchestrator | 2026-03-19 00:47:06 | INFO  | Task 
5fb3f850-f697-41e8-a121-44628e46b47a is in state STARTED
2026-03-19 00:47:06.888769 | orchestrator | 2026-03-19 00:47:06 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:47:06.889230 | orchestrator | 2026-03-19 00:47:06 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:09.934764 | orchestrator | 2026-03-19 00:47:09 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:09.935544 | orchestrator | 2026-03-19 00:47:09 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:09.938060 | orchestrator | 2026-03-19 00:47:09 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:09.939321 | orchestrator | 2026-03-19 00:47:09 | INFO  | Task 5fb3f850-f697-41e8-a121-44628e46b47a is in state SUCCESS
2026-03-19 00:47:09.940079 | orchestrator |
2026-03-19 00:47:09.940097 | orchestrator |
2026-03-19 00:47:09.940104 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-19 00:47:09.940112 | orchestrator |
2026-03-19 00:47:09.940118 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-19 00:47:09.940125 | orchestrator | Thursday 19 March 2026 00:45:46 +0000 (0:00:00.967) 0:00:00.967 ********
2026-03-19 00:47:09.940131 | orchestrator | ok: [testbed-manager] => {
2026-03-19 00:47:09.940140 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-19 00:47:09.940148 | orchestrator | }
2026-03-19 00:47:09.940154 | orchestrator |
2026-03-19 00:47:09.940160 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-19 00:47:09.940166 | orchestrator | Thursday 19 March 2026 00:45:47 +0000 (0:00:00.281) 0:00:01.249 ********
2026-03-19 00:47:09.940172 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940179 | orchestrator |
2026-03-19 00:47:09.940186 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-19 00:47:09.940192 | orchestrator | Thursday 19 March 2026 00:45:49 +0000 (0:00:02.208) 0:00:03.457 ********
2026-03-19 00:47:09.940198 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-19 00:47:09.940204 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-19 00:47:09.940210 | orchestrator |
2026-03-19 00:47:09.940217 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-19 00:47:09.940223 | orchestrator | Thursday 19 March 2026 00:45:50 +0000 (0:00:01.561) 0:00:05.018 ********
2026-03-19 00:47:09.940230 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940236 | orchestrator |
2026-03-19 00:47:09.940242 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-19 00:47:09.940249 | orchestrator | Thursday 19 March 2026 00:45:53 +0000 (0:00:02.821) 0:00:07.840 ********
2026-03-19 00:47:09.940255 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940263 | orchestrator |
2026-03-19 00:47:09.940267 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-19 00:47:09.940272 | orchestrator | Thursday 19 March 2026 00:45:55 +0000 (0:00:02.059) 0:00:09.899 ********
2026-03-19 00:47:09.940276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-19 00:47:09.940281 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940285 | orchestrator |
2026-03-19 00:47:09.940289 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-19 00:47:09.940293 | orchestrator | Thursday 19 March 2026 00:46:21 +0000 (0:00:25.286) 0:00:35.186 ********
2026-03-19 00:47:09.940297 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940300 | orchestrator |
2026-03-19 00:47:09.940305 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:47:09.940310 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:09.940316 | orchestrator |
2026-03-19 00:47:09.940320 | orchestrator |
2026-03-19 00:47:09.940324 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:47:09.940327 | orchestrator | Thursday 19 March 2026 00:46:23 +0000 (0:00:02.438) 0:00:37.625 ********
2026-03-19 00:47:09.940372 | orchestrator | ===============================================================================
2026-03-19 00:47:09.940377 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.29s
2026-03-19 00:47:09.940381 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.82s
2026-03-19 00:47:09.940385 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.44s
2026-03-19 00:47:09.940389 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.21s
2026-03-19 00:47:09.940399 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.06s
2026-03-19 00:47:09.940403 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.56s
2026-03-19 00:47:09.940407 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.28s
2026-03-19 00:47:09.940411 | orchestrator |
2026-03-19 00:47:09.940414 | orchestrator |
2026-03-19 00:47:09.940418 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-19 00:47:09.940422 | orchestrator |
2026-03-19 00:47:09.940426 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-19 00:47:09.940429 | orchestrator | Thursday 19 March 2026 00:45:47 +0000 (0:00:00.579) 0:00:00.579 ********
2026-03-19 00:47:09.940433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-19 00:47:09.940439 | orchestrator |
2026-03-19 00:47:09.940443 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-19 00:47:09.940447 | orchestrator | Thursday 19 March 2026 00:45:47 +0000 (0:00:00.177) 0:00:00.756 ********
2026-03-19 00:47:09.940450 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-19 00:47:09.940454 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-19 00:47:09.940458 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-19 00:47:09.940462 | orchestrator |
2026-03-19 00:47:09.940466 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-19 00:47:09.940469 | orchestrator | Thursday 19 March 2026 00:45:50 +0000 (0:00:02.536) 0:00:03.293 ********
2026-03-19 00:47:09.940473 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940477 | orchestrator |
2026-03-19 00:47:09.940482 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-19 00:47:09.940486 | orchestrator | Thursday 19 March 2026 00:45:53 +0000 (0:00:02.688) 0:00:05.981 ********
2026-03-19 00:47:09.940498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-19 00:47:09.940502 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940506 | orchestrator |
2026-03-19 00:47:09.940510 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-19 00:47:09.940514 | orchestrator | Thursday 19 March 2026 00:46:26 +0000 (0:00:33.485) 0:00:39.467 ********
2026-03-19 00:47:09.940517 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940521 | orchestrator |
2026-03-19 00:47:09.940525 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-19 00:47:09.940529 | orchestrator | Thursday 19 March 2026 00:46:28 +0000 (0:00:01.682) 0:00:41.149 ********
2026-03-19 00:47:09.940533 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940537 | orchestrator |
2026-03-19 00:47:09.940541 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-19 00:47:09.940545 | orchestrator | Thursday 19 March 2026 00:46:29 +0000 (0:00:00.884) 0:00:42.034 ********
2026-03-19 00:47:09.940548 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940552 | orchestrator |
2026-03-19 00:47:09.940556 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-19 00:47:09.940560 | orchestrator | Thursday 19 March 2026 00:46:32 +0000 (0:00:03.167) 0:00:45.201 ********
2026-03-19 00:47:09.940568 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940572 | orchestrator |
2026-03-19 00:47:09.940576 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-19 00:47:09.940580 | orchestrator | Thursday 19 March 2026 00:46:33 +0000 (0:00:01.083) 0:00:46.285 ********
2026-03-19 00:47:09.940584 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940587 | orchestrator |
2026-03-19 00:47:09.940591 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-19 00:47:09.940595 | orchestrator | Thursday 19 March 2026 00:46:34 +0000 (0:00:00.835) 0:00:47.121 ********
2026-03-19 00:47:09.940599 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940603 | orchestrator |
2026-03-19 00:47:09.940606 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:47:09.940610 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:09.940616 | orchestrator |
2026-03-19 00:47:09.940622 | orchestrator |
2026-03-19 00:47:09.940628 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:47:09.940634 | orchestrator | Thursday 19 March 2026 00:46:34 +0000 (0:00:00.387) 0:00:47.509 ********
2026-03-19 00:47:09.940641 | orchestrator | ===============================================================================
2026-03-19 00:47:09.940647 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.49s
2026-03-19 00:47:09.940653 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.17s
2026-03-19 00:47:09.940659 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.69s
2026-03-19 00:47:09.940666 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.54s
2026-03-19 00:47:09.940672 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.68s
2026-03-19 00:47:09.940677 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.08s
2026-03-19 00:47:09.940683 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.88s
2026-03-19 00:47:09.940690 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.84s
2026-03-19 00:47:09.940696 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s
2026-03-19 00:47:09.940702 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.18s
2026-03-19 00:47:09.940709 | orchestrator |
2026-03-19 00:47:09.940716 | orchestrator |
2026-03-19 00:47:09.940722 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-19 00:47:09.940729 | orchestrator |
2026-03-19 00:47:09.940741 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-19 00:47:09.940746 | orchestrator | Thursday 19 March 2026 00:46:06 +0000 (0:00:00.359) 0:00:00.360 ********
2026-03-19 00:47:09.940750 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940754 | orchestrator |
2026-03-19 00:47:09.940759 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-19 00:47:09.940763 | orchestrator | Thursday 19 March 2026 00:46:07 +0000 (0:00:01.295) 0:00:01.655 ********
2026-03-19 00:47:09.940768 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-19 00:47:09.940772 | orchestrator |
2026-03-19 00:47:09.940777 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-19 00:47:09.940781 | orchestrator | Thursday 19 March 2026 00:46:08 +0000 (0:00:00.567) 0:00:02.223 ********
2026-03-19 00:47:09.940785 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940789 | orchestrator |
2026-03-19 00:47:09.940794 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-19 00:47:09.940798 | orchestrator | Thursday 19 March 2026 00:46:09 +0000 (0:00:01.153) 0:00:03.376 ********
2026-03-19 00:47:09.940802 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-19 00:47:09.940807 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:09.940811 | orchestrator |
2026-03-19 00:47:09.940821 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-19 00:47:09.940826 | orchestrator | Thursday 19 March 2026 00:47:03 +0000 (0:00:54.501) 0:00:57.877 ********
2026-03-19 00:47:09.940830 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:09.940834 | orchestrator |
2026-03-19 00:47:09.940838 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:47:09.940843 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:09.940847 | orchestrator |
2026-03-19 00:47:09.940851 | orchestrator |
2026-03-19 00:47:09.940856 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:47:09.940865 | orchestrator | Thursday 19 March 2026 00:47:07 +0000 (0:00:03.878) 0:01:01.756 ********
2026-03-19 00:47:09.940869 | orchestrator | ===============================================================================
2026-03-19 00:47:09.940874 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.50s
2026-03-19 00:47:09.940878 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.88s
2026-03-19 00:47:09.940883 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.30s
2026-03-19 00:47:09.940887 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.15s
2026-03-19 00:47:09.940891 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.57s
2026-03-19 00:47:09.941667 | orchestrator | 2026-03-19 00:47:09 | INFO  | Task 
1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:47:09.941708 | orchestrator | 2026-03-19 00:47:09 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:12.979040 | orchestrator | 2026-03-19 00:47:12 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:12.980303 | orchestrator | 2026-03-19 00:47:12 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:12.982865 | orchestrator | 2026-03-19 00:47:12 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:12.986006 | orchestrator | 2026-03-19 00:47:12 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:47:12.987033 | orchestrator | 2026-03-19 00:47:12 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:16.042664 | orchestrator | 2026-03-19 00:47:16 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:16.042740 | orchestrator | 2026-03-19 00:47:16 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:16.043956 | orchestrator | 2026-03-19 00:47:16 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:16.044603 | orchestrator | 2026-03-19 00:47:16 | INFO  | Task 1dd69546-d476-4530-8187-6cd885ed750f is in state STARTED
2026-03-19 00:47:16.047191 | orchestrator | 2026-03-19 00:47:16 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:19.125068 | orchestrator | 2026-03-19 00:47:19 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:19.127617 | orchestrator | 2026-03-19 00:47:19 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:19.129761 | orchestrator | 2026-03-19 00:47:19 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:19.131517 | orchestrator | 2026-03-19 00:47:19 | INFO  | Task 
1dd69546-d476-4530-8187-6cd885ed750f is in state SUCCESS 2026-03-19 00:47:19.132227 | orchestrator | 2026-03-19 00:47:19.132264 | orchestrator | 2026-03-19 00:47:19.132271 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:47:19.132277 | orchestrator | 2026-03-19 00:47:19.132283 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:47:19.132304 | orchestrator | Thursday 19 March 2026 00:45:46 +0000 (0:00:00.499) 0:00:00.499 ******** 2026-03-19 00:47:19.132310 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-19 00:47:19.132316 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-19 00:47:19.132321 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-19 00:47:19.132326 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-19 00:47:19.132332 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-19 00:47:19.132366 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-19 00:47:19.132373 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-19 00:47:19.132378 | orchestrator | 2026-03-19 00:47:19.132383 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-19 00:47:19.132388 | orchestrator | 2026-03-19 00:47:19.132394 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-19 00:47:19.132399 | orchestrator | Thursday 19 March 2026 00:45:49 +0000 (0:00:03.193) 0:00:03.693 ******** 2026-03-19 00:47:19.132416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
2026-03-19 00:47:19.132422 | orchestrator |
2026-03-19 00:47:19.132428 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-19 00:47:19.132433 | orchestrator | Thursday 19 March 2026 00:45:50 +0000 (0:00:01.448) 0:00:05.141 ********
2026-03-19 00:47:19.132438 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:47:19.132444 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:47:19.132449 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:47:19.132454 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:47:19.132459 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:47:19.132464 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:47:19.132470 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:19.132475 | orchestrator |
2026-03-19 00:47:19.132480 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-19 00:47:19.132485 | orchestrator | Thursday 19 March 2026 00:45:53 +0000 (0:00:02.878) 0:00:08.020 ********
2026-03-19 00:47:19.132490 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:19.132495 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:47:19.132500 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:47:19.132505 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:47:19.132510 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:47:19.132516 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:47:19.132521 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:47:19.132526 | orchestrator |
2026-03-19 00:47:19.132531 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-19 00:47:19.132536 | orchestrator | Thursday 19 March 2026 00:45:57 +0000 (0:00:03.740) 0:00:11.760 ********
2026-03-19 00:47:19.132542 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:47:19.132547 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:47:19.132552 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:19.132557 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:47:19.132562 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:47:19.132567 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:47:19.132572 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:47:19.132577 | orchestrator |
2026-03-19 00:47:19.132582 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-19 00:47:19.132587 | orchestrator | Thursday 19 March 2026 00:45:59 +0000 (0:00:02.143) 0:00:13.903 ********
2026-03-19 00:47:19.132592 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:19.132598 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:47:19.132603 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:47:19.132609 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:47:19.132619 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:47:19.132625 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:47:19.132630 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:47:19.132635 | orchestrator |
2026-03-19 00:47:19.132641 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-19 00:47:19.132646 | orchestrator | Thursday 19 March 2026 00:46:09 +0000 (0:00:10.109) 0:00:24.013 ********
2026-03-19 00:47:19.132651 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:47:19.132656 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:47:19.132661 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:47:19.132666 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:47:19.132671 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:47:19.132676 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:47:19.132682 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:19.132687 | orchestrator |
2026-03-19 00:47:19.132719 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
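The install flow above (gpg key, repository, `netdata` package) is a standard Debian-family apt pattern. A hedged sketch of what `install-Debian-family.yml` plausibly contains; the key and repository URLs are assumptions, not read from the role:

```yaml
# Illustrative tasks only; repo URLs and paths are assumed, not taken
# from the osism.services.netdata role itself.
- name: Add repository gpg key
  ansible.builtin.apt_key:
    url: https://repo.netdata.cloud/netdatabot.gpg.key  # assumed URL
    state: present

- name: Add repository
  ansible.builtin.apt_repository:
    # assumed repo line; the log only shows that an architecture-independent
    # repository replaces an older architecture-dependent one
    repo: "deb https://repo.netdata.cloud/repos/stable/{{ ansible_distribution | lower }}/ {{ ansible_distribution_release }}/"
    state: present

- name: Install package netdata
  ansible.builtin.apt:
    name: netdata
    state: present
    update_cache: true
```

The 39s runtime of "Install package netdata" in the recap below is consistent with a full `apt` install plus cache update across seven hosts in parallel.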
2026-03-19 00:47:19.132725 | orchestrator | Thursday 19 March 2026 00:46:48 +0000 (0:00:39.225) 0:01:03.238 ********
2026-03-19 00:47:19.132731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:47:19.132737 | orchestrator |
2026-03-19 00:47:19.132742 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-19 00:47:19.132748 | orchestrator | Thursday 19 March 2026 00:46:50 +0000 (0:00:01.307) 0:01:04.545 ********
2026-03-19 00:47:19.132755 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-19 00:47:19.132761 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-19 00:47:19.132766 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-19 00:47:19.132771 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-19 00:47:19.132786 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-19 00:47:19.132792 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-19 00:47:19.132797 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-19 00:47:19.132802 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-19 00:47:19.132807 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-19 00:47:19.132814 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-19 00:47:19.132820 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-19 00:47:19.132825 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-19 00:47:19.132830 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-19 00:47:19.132835 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-19 00:47:19.132840 | orchestrator |
2026-03-19 00:47:19.132846 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-19 00:47:19.132851 | orchestrator | Thursday 19 March 2026 00:46:54 +0000 (0:00:04.256) 0:01:08.802 ********
2026-03-19 00:47:19.132857 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:19.132862 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:47:19.132867 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:47:19.132873 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:47:19.132878 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:47:19.132884 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:47:19.132889 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:47:19.132895 | orchestrator |
2026-03-19 00:47:19.132900 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-19 00:47:19.132906 | orchestrator | Thursday 19 March 2026 00:46:56 +0000 (0:00:01.646) 0:01:10.448 ********
2026-03-19 00:47:19.132911 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:47:19.132917 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:47:19.132922 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:19.132927 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:47:19.132936 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:47:19.132941 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:47:19.132947 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:47:19.132952 | orchestrator |
2026-03-19 00:47:19.132958 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-19 00:47:19.132963 | orchestrator | Thursday 19 March 2026 00:46:57 +0000 (0:00:01.481) 0:01:11.930 ********
2026-03-19 00:47:19.132968 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:19.132974 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:47:19.132979 | orchestrator | ok: [testbed-node-1]
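The opt-out pair above (a "Retrieve ... status" task that comes back `ok`, then an "Opt out" task that reports `changed`) is the usual stat-then-create idempotency pattern: netdata disables anonymous telemetry when the marker file exists. A minimal sketch, with the register name being illustrative:

```yaml
# Sketch of the stat/touch pattern suggested by the log; only the marker
# file path is taken from the task names, the rest is an assumption.
- name: Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status
  ansible.builtin.stat:
    path: /etc/netdata/.opt-out-from-anonymous-statistics
  register: netdata_optout  # hypothetical variable name

- name: Opt out from anonymous statistics
  ansible.builtin.file:
    path: /etc/netdata/.opt-out-from-anonymous-statistics
    state: touch
    mode: "0644"
  when: not netdata_optout.stat.exists
```

On a second run the `when` guard would skip the touch, so the task reports `changed` only on first deployment, as seen here.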
2026-03-19 00:47:19.132984 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:47:19.132989 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:47:19.132995 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:47:19.133000 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:47:19.133005 | orchestrator |
2026-03-19 00:47:19.133010 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-19 00:47:19.133016 | orchestrator | Thursday 19 March 2026 00:46:59 +0000 (0:00:01.708) 0:01:13.639 ********
2026-03-19 00:47:19.133021 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:47:19.133026 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:47:19.133032 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:47:19.133037 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:47:19.133042 | orchestrator | ok: [testbed-manager]
2026-03-19 00:47:19.133047 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:47:19.133052 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:47:19.133057 | orchestrator |
2026-03-19 00:47:19.133063 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-19 00:47:19.133069 | orchestrator | Thursday 19 March 2026 00:47:01 +0000 (0:00:01.391) 0:01:15.531 ********
2026-03-19 00:47:19.133074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-19 00:47:19.133082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:47:19.133087 | orchestrator |
2026-03-19 00:47:19.133095 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-19 00:47:19.133101 | orchestrator | Thursday 19 March 2026 00:47:02 +0000 (0:00:01.391) 0:01:16.923 ********
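"Include host type specific tasks" splits the inventory: the manager host gets `server.yml` (the netdata streaming parent, which also raises `vm.max_map_count`), the nodes get `client.yml`. A hedged sketch of that split; the group name and sysctl value are assumptions based only on what the log shows:

```yaml
# Illustrative only: the real role's condition and the exact
# vm.max_map_count value are not visible in the log.
- name: Include host type specific tasks
  ansible.builtin.include_tasks: >-
    {{ 'server.yml' if inventory_hostname == 'testbed-manager' else 'client.yml' }}

# Inside server.yml, something along these lines raises the mmap limit
# for the netdata database engine on the streaming parent:
- name: Set sysctl vm.max_map_count parameter
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"  # assumed value
    state: present
    sysctl_set: true
```

This matches the log: only `testbed-manager` reports `changed` for the sysctl task, and only the manager's `stream.conf` acts as the receiving end of the six clients.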
2026-03-19 00:47:19.133107 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:19.133113 | orchestrator |
2026-03-19 00:47:19.133118 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-19 00:47:19.133123 | orchestrator | Thursday 19 March 2026 00:47:05 +0000 (0:00:02.552) 0:01:19.475 ********
2026-03-19 00:47:19.133129 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:47:19.133135 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:47:19.133140 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:47:19.133146 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:47:19.133150 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:47:19.133155 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:47:19.133161 | orchestrator | changed: [testbed-manager]
2026-03-19 00:47:19.133166 | orchestrator |
2026-03-19 00:47:19.133171 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:47:19.133177 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133183 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133189 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133194 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133208 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133215 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133223 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:47:19.133228 | orchestrator |
2026-03-19 00:47:19.133234 | orchestrator |
2026-03-19 00:47:19.133239 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:47:19.133244 | orchestrator | Thursday 19 March 2026 00:47:17 +0000 (0:00:12.514) 0:01:31.989 ********
2026-03-19 00:47:19.133250 | orchestrator | ===============================================================================
2026-03-19 00:47:19.133255 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.23s
2026-03-19 00:47:19.133261 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 12.51s
2026-03-19 00:47:19.133266 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.11s
2026-03-19 00:47:19.133271 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.26s
2026-03-19 00:47:19.133276 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.74s
2026-03-19 00:47:19.133282 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.19s
2026-03-19 00:47:19.133287 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.88s
2026-03-19 00:47:19.133292 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.55s
2026-03-19 00:47:19.133298 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.14s
2026-03-19 00:47:19.133303 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.89s
2026-03-19 00:47:19.133308 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.71s
2026-03-19 00:47:19.133313 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.65s
2026-03-19 00:47:19.133318 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.48s
2026-03-19 00:47:19.133324 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.45s
2026-03-19 00:47:19.133329 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.39s
2026-03-19 00:47:19.133334 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.31s
2026-03-19 00:47:19.133355 | orchestrator | 2026-03-19 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:22.207452 | orchestrator | 2026-03-19 00:47:22 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:22.208866 | orchestrator | 2026-03-19 00:47:22 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:22.210965 | orchestrator | 2026-03-19 00:47:22 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:22.211011 | orchestrator | 2026-03-19 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:25.303449 | orchestrator | 2026-03-19 00:47:25 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:25.306287 | orchestrator | 2026-03-19 00:47:25 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:25.307701 | orchestrator | 2026-03-19 00:47:25 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:25.308762 | orchestrator | 2026-03-19 00:47:25 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:28.410921 | orchestrator | 2026-03-19 00:47:28 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:28.412274 | orchestrator | 2026-03-19 00:47:28 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:28.415217 | orchestrator | 2026-03-19 00:47:28 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:28.415264 | orchestrator | 2026-03-19 00:47:28 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:31.460768 | orchestrator | 2026-03-19 00:47:31 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:31.461313 | orchestrator | 2026-03-19 00:47:31 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:31.464513 | orchestrator | 2026-03-19 00:47:31 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:31.464566 | orchestrator | 2026-03-19 00:47:31 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:34.512257 | orchestrator | 2026-03-19 00:47:34 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:34.513698 | orchestrator | 2026-03-19 00:47:34 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:34.515942 | orchestrator | 2026-03-19 00:47:34 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:34.515980 | orchestrator | 2026-03-19 00:47:34 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:37.555995 | orchestrator | 2026-03-19 00:47:37 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:37.558497 | orchestrator | 2026-03-19 00:47:37 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:37.559098 | orchestrator | 2026-03-19 00:47:37 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:37.559125 | orchestrator | 2026-03-19 00:47:37 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:40.598379 | orchestrator | 2026-03-19 00:47:40 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:40.601351 | orchestrator | 2026-03-19 00:47:40 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:40.602120 | orchestrator | 2026-03-19 00:47:40 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:40.602711 | orchestrator | 2026-03-19 00:47:40 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:43.656965 | orchestrator | 2026-03-19 00:47:43 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:43.657911 | orchestrator | 2026-03-19 00:47:43 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:43.658916 | orchestrator | 2026-03-19 00:47:43 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:43.659036 | orchestrator | 2026-03-19 00:47:43 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:46.724756 | orchestrator | 2026-03-19 00:47:46 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:46.731815 | orchestrator | 2026-03-19 00:47:46 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:46.733623 | orchestrator | 2026-03-19 00:47:46 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:46.733673 | orchestrator | 2026-03-19 00:47:46 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:49.774652 | orchestrator | 2026-03-19 00:47:49 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:49.775719 | orchestrator | 2026-03-19 00:47:49 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:49.777019 | orchestrator | 2026-03-19 00:47:49 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state STARTED
2026-03-19 00:47:49.777075 | orchestrator | 2026-03-19 00:47:49 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:52.813667 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:47:52.814352 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task dfda3723-dccd-484f-8914-91225dacd482 is in state STARTED
2026-03-19 00:47:52.815218 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:52.815984 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:52.820898 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task 9b388588-8890-4ede-9117-b2c55a21dd34 is in state SUCCESS
2026-03-19 00:47:52.822511 | orchestrator |
2026-03-19 00:47:52.822573 | orchestrator |
2026-03-19 00:47:52.822583 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-19 00:47:52.822590 | orchestrator |
2026-03-19 00:47:52.822596 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-19 00:47:52.822602 | orchestrator | Thursday 19 March 2026 00:45:40 +0000 (0:00:00.332) 0:00:00.332 ********
2026-03-19 00:47:52.822609 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:47:52.822617 | orchestrator |
2026-03-19 00:47:52.822623 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-19 00:47:52.822629 | orchestrator | Thursday 19 March 2026 00:45:41 +0000 (0:00:01.240) 0:00:01.572 ********
2026-03-19 00:47:52.822636 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822643 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822657 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822664 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822674 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822681 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822688 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822695 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822708 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822714 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822720 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822724 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 00:47:52.822728 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 00:47:52.822732 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 00:47:52.822736 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 00:47:52.822741 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822748 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 00:47:52.822754 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 00:47:52.822775 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 00:47:52.822783 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 00:47:52.822789 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
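The item shape `(item=[{'service_name': 'cron'}, 'cron'])` in "Ensuring config directories exist" is a two-element pair (service definition, service name), which is what a `with_together`/`subelements`-style loop over the service dict produces. A hedged sketch; the variable name `common_services` and the directory mode are assumptions:

```yaml
# Illustrative loop shape only; the real role's variable names and
# file attributes are not visible in the log.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.1 }}"
    state: directory
    mode: "0770"
  with_together:
    - "{{ common_services | dict2items | map(attribute='value') | list }}"
    - "{{ common_services | dict2items | map(attribute='key') | list }}"
```

`item.0` is then the service definition dict and `item.1` the service name, matching the `[{...}, 'cron']` pairs logged for cron, fluentd, and kolla-toolbox on every host.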
2026-03-19 00:47:52.822796 | orchestrator |
2026-03-19 00:47:52.822802 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-19 00:47:52.822809 | orchestrator | Thursday 19 March 2026 00:45:45 +0000 (0:00:04.412) 0:00:05.985 ********
2026-03-19 00:47:52.822813 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:47:52.822817 | orchestrator |
2026-03-19 00:47:52.822821 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-19 00:47:52.822825 | orchestrator | Thursday 19 March 2026 00:45:47 +0000 (0:00:01.390) 0:00:07.376 ********
2026-03-19 00:47:52.822832 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.822898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822939 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.822996 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.823005 | orchestrator |
2026-03-19 00:47:52.823011 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-19 00:47:52.823017 | orchestrator | Thursday 19 March 2026 00:45:53 +0000 (0:00:05.978) 0:00:13.354 ********
2026-03-19 00:47:52.823024 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.823030 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.823037 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.823043 | orchestrator | skipping: [testbed-manager]
2026-03-19 00:47:52.823053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 00:47:52.823064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 00:47:52.823071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823078 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:47:52.823085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823116 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:47:52.823125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823145 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:47:52.823151 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:47:52.823157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823182 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:47:52.823188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823216 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:47:52.823222 | orchestrator | 2026-03-19 00:47:52.823228 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-19 00:47:52.823233 | orchestrator | Thursday 19 March 2026 00:45:55 +0000 (0:00:02.362) 0:00:15.717 ******** 2026-03-19 00:47:52.823237 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823248 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823253 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823268 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823275 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:47:52.823280 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:47:52.823298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823341 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:47:52.823345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823357 | orchestrator | 
skipping: [testbed-node-2] 2026-03-19 00:47:52.823364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823756 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:47:52.823762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823787 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:47:52.823791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 00:47:52.823795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.823806 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:47:52.823810 | orchestrator | 2026-03-19 00:47:52.823815 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-19 00:47:52.823819 | orchestrator | Thursday 19 March 2026 00:45:58 +0000 (0:00:03.019) 0:00:18.736 ******** 2026-03-19 00:47:52.823823 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:47:52.823827 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:47:52.823831 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:47:52.823834 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:47:52.823991 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:47:52.824003 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:47:52.824007 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 00:47:52.824011 | orchestrator | 2026-03-19 00:47:52.824015 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-19 00:47:52.824019 | orchestrator | Thursday 19 March 2026 00:46:00 +0000 (0:00:01.812) 0:00:20.548 ******** 2026-03-19 00:47:52.824022 | orchestrator | skipping: [testbed-manager] 2026-03-19 00:47:52.824026 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:47:52.824030 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:47:52.824034 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:47:52.824038 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:47:52.824042 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:47:52.824046 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:47:52.824050 | orchestrator | 2026-03-19 00:47:52.824054 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-19 00:47:52.824058 | orchestrator | Thursday 19 March 2026 00:46:02 +0000 (0:00:02.272) 0:00:22.821 ******** 2026-03-19 00:47:52.824062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824071 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824124 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824185 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824236 | orchestrator | 2026-03-19 00:47:52.824243 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-19 00:47:52.824250 | orchestrator | Thursday 19 March 2026 00:46:07 +0000 (0:00:05.110) 0:00:27.931 ******** 2026-03-19 00:47:52.824256 | orchestrator | [WARNING]: Skipped 2026-03-19 00:47:52.824264 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-19 00:47:52.824271 | orchestrator | to this access issue: 2026-03-19 00:47:52.824279 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-19 00:47:52.824286 | orchestrator | directory 2026-03-19 00:47:52.824292 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 00:47:52.824299 | orchestrator | 2026-03-19 00:47:52.824306 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-19 00:47:52.824312 | orchestrator | Thursday 19 March 2026 00:46:08 +0000 (0:00:00.985) 0:00:28.917 ******** 2026-03-19 00:47:52.824330 | orchestrator | [WARNING]: Skipped 2026-03-19 00:47:52.824337 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 
2026-03-19 00:47:52.824354 | orchestrator | to this access issue: 2026-03-19 00:47:52.824359 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-19 00:47:52.824364 | orchestrator | directory 2026-03-19 00:47:52.824367 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 00:47:52.824371 | orchestrator | 2026-03-19 00:47:52.824375 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-19 00:47:52.824379 | orchestrator | Thursday 19 March 2026 00:46:09 +0000 (0:00:00.891) 0:00:29.808 ******** 2026-03-19 00:47:52.824383 | orchestrator | [WARNING]: Skipped 2026-03-19 00:47:52.824387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-19 00:47:52.824391 | orchestrator | to this access issue: 2026-03-19 00:47:52.824395 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-19 00:47:52.824398 | orchestrator | directory 2026-03-19 00:47:52.824403 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 00:47:52.824407 | orchestrator | 2026-03-19 00:47:52.824411 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-19 00:47:52.824415 | orchestrator | Thursday 19 March 2026 00:46:10 +0000 (0:00:00.887) 0:00:30.696 ******** 2026-03-19 00:47:52.824419 | orchestrator | [WARNING]: Skipped 2026-03-19 00:47:52.824423 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-19 00:47:52.824426 | orchestrator | to this access issue: 2026-03-19 00:47:52.824430 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-19 00:47:52.824435 | orchestrator | directory 2026-03-19 00:47:52.824439 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 00:47:52.824443 | orchestrator | 2026-03-19 00:47:52.824448 | orchestrator | 
TASK [common : Copying over fluentd.conf] ************************************** 2026-03-19 00:47:52.824464 | orchestrator | Thursday 19 March 2026 00:46:11 +0000 (0:00:00.840) 0:00:31.536 ******** 2026-03-19 00:47:52.824468 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.824475 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.824479 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.824483 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.824486 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.824490 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.824494 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.824498 | orchestrator | 2026-03-19 00:47:52.824502 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-19 00:47:52.824506 | orchestrator | Thursday 19 March 2026 00:46:15 +0000 (0:00:03.948) 0:00:35.485 ******** 2026-03-19 00:47:52.824510 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824514 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824518 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824522 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824526 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824530 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824534 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 00:47:52.824538 | 
orchestrator | 2026-03-19 00:47:52.824542 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-19 00:47:52.824546 | orchestrator | Thursday 19 March 2026 00:46:18 +0000 (0:00:03.016) 0:00:38.501 ******** 2026-03-19 00:47:52.824550 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.824554 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.824558 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.824562 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.824566 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.824570 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.824574 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.824578 | orchestrator | 2026-03-19 00:47:52.824583 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-19 00:47:52.824587 | orchestrator | Thursday 19 March 2026 00:46:21 +0000 (0:00:03.272) 0:00:41.773 ******** 2026-03-19 00:47:52.824591 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824604 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824619 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824623 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824627 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824632 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824637 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824765 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824786 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824800 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824813 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.824827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:47:52.824833 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824842 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.824849 | orchestrator | 2026-03-19 00:47:52.824855 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-19 00:47:52.824862 | orchestrator | Thursday 19 March 2026 00:46:24 +0000 (0:00:02.917) 0:00:44.691 ******** 2026-03-19 00:47:52.824868 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824906 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824912 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824918 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 00:47:52.824924 | orchestrator | 2026-03-19 00:47:52.824931 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] 
********************** 2026-03-19 00:47:52.824938 | orchestrator | Thursday 19 March 2026 00:46:28 +0000 (0:00:04.158) 0:00:48.849 ******** 2026-03-19 00:47:52.824944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824950 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824957 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824963 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824969 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824976 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824982 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 00:47:52.824995 | orchestrator | 2026-03-19 00:47:52.825002 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-19 00:47:52.825008 | orchestrator | Thursday 19 March 2026 00:46:31 +0000 (0:00:03.062) 0:00:51.912 ******** 2026-03-19 00:47:52.825015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825037 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825080 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 00:47:52.825120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825138 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-19 00:47:52.825169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:47:52.825178 | orchestrator | 2026-03-19 00:47:52.825182 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-19 00:47:52.825186 | orchestrator | Thursday 19 March 2026 00:46:35 +0000 (0:00:03.771) 0:00:55.684 ******** 2026-03-19 00:47:52.825190 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.825194 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.825198 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.825202 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.825209 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.825213 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.825216 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.825220 | orchestrator | 2026-03-19 00:47:52.825224 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-19 00:47:52.825228 | orchestrator | Thursday 19 March 2026 00:46:36 +0000 (0:00:01.334) 0:00:57.018 ******** 2026-03-19 
00:47:52.825232 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.825235 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.825239 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.825243 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.825247 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.825250 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.825254 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.825258 | orchestrator | 2026-03-19 00:47:52.825262 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825266 | orchestrator | Thursday 19 March 2026 00:46:37 +0000 (0:00:01.263) 0:00:58.281 ******** 2026-03-19 00:47:52.825270 | orchestrator | 2026-03-19 00:47:52.825273 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825277 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.063) 0:00:58.345 ******** 2026-03-19 00:47:52.825281 | orchestrator | 2026-03-19 00:47:52.825285 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825289 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.061) 0:00:58.406 ******** 2026-03-19 00:47:52.825293 | orchestrator | 2026-03-19 00:47:52.825297 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825301 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.060) 0:00:58.467 ******** 2026-03-19 00:47:52.825307 | orchestrator | 2026-03-19 00:47:52.825313 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825338 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.066) 0:00:58.533 ******** 2026-03-19 00:47:52.825344 | orchestrator | 2026-03-19 00:47:52.825350 | orchestrator | 
TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825357 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.066) 0:00:58.599 ******** 2026-03-19 00:47:52.825363 | orchestrator | 2026-03-19 00:47:52.825369 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 00:47:52.825376 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.068) 0:00:58.668 ******** 2026-03-19 00:47:52.825383 | orchestrator | 2026-03-19 00:47:52.825389 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-19 00:47:52.825400 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:00.087) 0:00:58.756 ******** 2026-03-19 00:47:52.825404 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.825408 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.825412 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.825416 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.825420 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.825424 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.825428 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.825432 | orchestrator | 2026-03-19 00:47:52.825436 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-19 00:47:52.825439 | orchestrator | Thursday 19 March 2026 00:47:11 +0000 (0:00:32.971) 0:01:31.727 ******** 2026-03-19 00:47:52.825443 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.825447 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.825451 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.825455 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.825459 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.825462 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.825466 | 
orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.825480 | orchestrator | 2026-03-19 00:47:52.825484 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-19 00:47:52.825488 | orchestrator | Thursday 19 March 2026 00:47:43 +0000 (0:00:32.411) 0:02:04.139 ******** 2026-03-19 00:47:52.825492 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:47:52.825496 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:47:52.825500 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:47:52.825504 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:47:52.825507 | orchestrator | ok: [testbed-manager] 2026-03-19 00:47:52.825511 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:47:52.825515 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:47:52.825519 | orchestrator | 2026-03-19 00:47:52.825522 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-19 00:47:52.825526 | orchestrator | Thursday 19 March 2026 00:47:45 +0000 (0:00:02.022) 0:02:06.162 ******** 2026-03-19 00:47:52.825530 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:47:52.825534 | orchestrator | changed: [testbed-manager] 2026-03-19 00:47:52.825538 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:47:52.825542 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:47:52.825546 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:47:52.825549 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:47:52.825553 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:47:52.825557 | orchestrator | 2026-03-19 00:47:52.825561 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:47:52.825565 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 00:47:52.825570 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 
2026-03-19 00:47:52.825573 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 00:47:52.825577 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 00:47:52.825581 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 00:47:52.825585 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 00:47:52.825589 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 00:47:52.825593 | orchestrator | 2026-03-19 00:47:52.825596 | orchestrator | 2026-03-19 00:47:52.825600 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:47:52.825604 | orchestrator | Thursday 19 March 2026 00:47:51 +0000 (0:00:05.271) 0:02:11.434 ******** 2026-03-19 00:47:52.825608 | orchestrator | =============================================================================== 2026-03-19 00:47:52.825612 | orchestrator | common : Restart fluentd container ------------------------------------- 32.97s 2026-03-19 00:47:52.825615 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.41s 2026-03-19 00:47:52.825619 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.98s 2026-03-19 00:47:52.825623 | orchestrator | common : Restart cron container ----------------------------------------- 5.27s 2026-03-19 00:47:52.825627 | orchestrator | common : Copying over config.json files for services -------------------- 5.11s 2026-03-19 00:47:52.825631 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.41s 2026-03-19 00:47:52.825634 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.16s 
2026-03-19 00:47:52.825638 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.95s 2026-03-19 00:47:52.825645 | orchestrator | common : Check common containers ---------------------------------------- 3.77s 2026-03-19 00:47:52.825649 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.27s 2026-03-19 00:47:52.825653 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.06s 2026-03-19 00:47:52.825657 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.02s 2026-03-19 00:47:52.825660 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.02s 2026-03-19 00:47:52.825665 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.92s 2026-03-19 00:47:52.825673 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.36s 2026-03-19 00:47:52.825677 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.27s 2026-03-19 00:47:52.825681 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.02s 2026-03-19 00:47:52.825684 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.81s 2026-03-19 00:47:52.825688 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s 2026-03-19 00:47:52.825692 | orchestrator | common : Creating log volume -------------------------------------------- 1.33s 2026-03-19 00:47:52.825696 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED 2026-03-19 00:47:52.825703 | orchestrator | 2026-03-19 00:47:52 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:47:52.825707 | orchestrator | 2026-03-19 00:47:52 | INFO  | Wait 1 second(s) until the next 
check
2026-03-19 00:47:55.854532 | orchestrator | 2026-03-19 00:47:55 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:47:55.854991 | orchestrator | 2026-03-19 00:47:55 | INFO  | Task dfda3723-dccd-484f-8914-91225dacd482 is in state STARTED
2026-03-19 00:47:55.855559 | orchestrator | 2026-03-19 00:47:55 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:55.856337 | orchestrator | 2026-03-19 00:47:55 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:55.859651 | orchestrator | 2026-03-19 00:47:55 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:47:55.860219 | orchestrator | 2026-03-19 00:47:55 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:47:55.860251 | orchestrator | 2026-03-19 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:47:58.893154 | orchestrator | 2026-03-19 00:47:58 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:47:58.893743 | orchestrator | 2026-03-19 00:47:58 | INFO  | Task dfda3723-dccd-484f-8914-91225dacd482 is in state STARTED
2026-03-19 00:47:58.894361 | orchestrator | 2026-03-19 00:47:58 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:47:58.895143 | orchestrator | 2026-03-19 00:47:58 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:47:58.896045 | orchestrator | 2026-03-19 00:47:58 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:47:58.897242 | orchestrator | 2026-03-19 00:47:58 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:47:58.897383 | orchestrator | 2026-03-19 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:01.926487 | orchestrator | 2026-03-19 00:48:01 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:01.927944 | orchestrator | 2026-03-19 00:48:01 | INFO  | Task dfda3723-dccd-484f-8914-91225dacd482 is in state STARTED
2026-03-19 00:48:01.927996 | orchestrator | 2026-03-19 00:48:01 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:01.928000 | orchestrator | 2026-03-19 00:48:01 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:01.928741 | orchestrator | 2026-03-19 00:48:01 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:48:01.930542 | orchestrator | 2026-03-19 00:48:01 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:01.930575 | orchestrator | 2026-03-19 00:48:01 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:04.965427 | orchestrator | 2026-03-19 00:48:04 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:04.968725 | orchestrator | 2026-03-19 00:48:04 | INFO  | Task dfda3723-dccd-484f-8914-91225dacd482 is in state STARTED
2026-03-19 00:48:04.971033 | orchestrator | 2026-03-19 00:48:04 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:04.972818 | orchestrator | 2026-03-19 00:48:04 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:04.975253 | orchestrator | 2026-03-19 00:48:04 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:48:04.976868 | orchestrator | 2026-03-19 00:48:04 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:04.976953 | orchestrator | 2026-03-19 00:48:04 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:08.012738 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:08.013223 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:08.013977 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task dfda3723-dccd-484f-8914-91225dacd482 is in state SUCCESS
2026-03-19 00:48:08.014657 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:08.015676 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:08.016625 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:48:08.017513 | orchestrator | 2026-03-19 00:48:08 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:08.017904 | orchestrator | 2026-03-19 00:48:08 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:11.067020 | orchestrator | 2026-03-19 00:48:11 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:11.067578 | orchestrator | 2026-03-19 00:48:11 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:11.068618 | orchestrator | 2026-03-19 00:48:11 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:11.071030 | orchestrator | 2026-03-19 00:48:11 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:11.071669 | orchestrator | 2026-03-19 00:48:11 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:48:11.072464 | orchestrator | 2026-03-19 00:48:11 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:11.072627 | orchestrator | 2026-03-19 00:48:11 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:14.144493 | orchestrator | 2026-03-19 00:48:14 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:14.144553 | orchestrator | 2026-03-19 00:48:14 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:14.144583 | orchestrator | 2026-03-19 00:48:14 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:14.144591 | orchestrator | 2026-03-19 00:48:14 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:14.144594 | orchestrator | 2026-03-19 00:48:14 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:48:14.144598 | orchestrator | 2026-03-19 00:48:14 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:14.144601 | orchestrator | 2026-03-19 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:17.186444 | orchestrator | 2026-03-19 00:48:17 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:17.186496 | orchestrator | 2026-03-19 00:48:17 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:17.186768 | orchestrator | 2026-03-19 00:48:17 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:17.187834 | orchestrator | 2026-03-19 00:48:17 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:17.188446 | orchestrator | 2026-03-19 00:48:17 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state STARTED
2026-03-19 00:48:17.189289 | orchestrator | 2026-03-19 00:48:17 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:17.189317 | orchestrator | 2026-03-19 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:20.244094 | orchestrator | 2026-03-19 00:48:20 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:20.244469 | orchestrator | 2026-03-19 00:48:20 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:20.245961 | orchestrator | 2026-03-19 00:48:20 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:20.247311 | orchestrator | 2026-03-19 00:48:20 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:48:20.248257 | orchestrator | 2026-03-19 00:48:20.248288 | orchestrator | 2026-03-19 00:48:20.248292 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:48:20.248308 | orchestrator | 2026-03-19 00:48:20.248313 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 00:48:20.248318 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.305) 0:00:00.305 ******** 2026-03-19 00:48:20.248324 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:48:20.248330 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:48:20.248335 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:48:20.248340 | orchestrator | 2026-03-19 00:48:20.248345 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:48:20.248350 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.443) 0:00:00.748 ******** 2026-03-19 00:48:20.248356 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-19 00:48:20.248362 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-19 00:48:20.248367 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-19 00:48:20.248373 | orchestrator | 2026-03-19 00:48:20.248379 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-19 00:48:20.248384 | orchestrator | 2026-03-19 00:48:20.248390 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-19 00:48:20.248394 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.383) 0:00:01.132 ******** 2026-03-19 00:48:20.248414 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-19 00:48:20.248436 | orchestrator | 2026-03-19 00:48:20.248440 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-19 00:48:20.248443 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.738) 0:00:01.871 ******** 2026-03-19 00:48:20.248446 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-19 00:48:20.248450 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-19 00:48:20.248453 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-19 00:48:20.248456 | orchestrator | 2026-03-19 00:48:20.248460 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-19 00:48:20.248463 | orchestrator | Thursday 19 March 2026 00:47:58 +0000 (0:00:01.689) 0:00:03.560 ******** 2026-03-19 00:48:20.248474 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-19 00:48:20.248477 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-19 00:48:20.248481 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-19 00:48:20.248484 | orchestrator | 2026-03-19 00:48:20.248487 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-19 00:48:20.248490 | orchestrator | Thursday 19 March 2026 00:48:00 +0000 (0:00:01.864) 0:00:05.424 ******** 2026-03-19 00:48:20.248494 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:48:20.248497 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:48:20.248500 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:48:20.248504 | orchestrator | 2026-03-19 00:48:20.248507 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-19 00:48:20.248510 | orchestrator | Thursday 19 March 2026 00:48:02 +0000 (0:00:02.045) 0:00:07.470 ******** 2026-03-19 00:48:20.248513 | orchestrator | changed: [testbed-node-0] 2026-03-19 
00:48:20.248517 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:48:20.248520 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:48:20.248523 | orchestrator | 2026-03-19 00:48:20.248526 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:48:20.248530 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:48:20.248534 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:48:20.248537 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:48:20.248540 | orchestrator | 2026-03-19 00:48:20.248543 | orchestrator | 2026-03-19 00:48:20.248547 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:48:20.248550 | orchestrator | Thursday 19 March 2026 00:48:05 +0000 (0:00:03.152) 0:00:10.623 ******** 2026-03-19 00:48:20.248553 | orchestrator | =============================================================================== 2026-03-19 00:48:20.248556 | orchestrator | memcached : Restart memcached container --------------------------------- 3.15s 2026-03-19 00:48:20.248559 | orchestrator | memcached : Check memcached container ----------------------------------- 2.05s 2026-03-19 00:48:20.248563 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.86s 2026-03-19 00:48:20.248566 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.69s 2026-03-19 00:48:20.248569 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.74s 2026-03-19 00:48:20.248572 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2026-03-19 00:48:20.248575 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.38s 2026-03-19 00:48:20.248578 | orchestrator | 2026-03-19 00:48:20.248582 | orchestrator | 2026-03-19 00:48:20.248585 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:48:20.248588 | orchestrator | 2026-03-19 00:48:20.248591 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 00:48:20.248598 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.449) 0:00:00.449 ******** 2026-03-19 00:48:20.248601 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:48:20.248604 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:48:20.248607 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:48:20.248611 | orchestrator | 2026-03-19 00:48:20.248614 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:48:20.248623 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.383) 0:00:00.833 ******** 2026-03-19 00:48:20.248627 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-19 00:48:20.248630 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-19 00:48:20.248633 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-19 00:48:20.248636 | orchestrator | 2026-03-19 00:48:20.248639 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-19 00:48:20.248643 | orchestrator | 2026-03-19 00:48:20.248646 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-19 00:48:20.248649 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.568) 0:00:01.402 ******** 2026-03-19 00:48:20.248652 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:48:20.248655 | orchestrator | 2026-03-19 00:48:20.248659 | orchestrator | TASK [redis 
: Ensuring config directories exist] ******************************* 2026-03-19 00:48:20.248662 | orchestrator | Thursday 19 March 2026 00:47:57 +0000 (0:00:00.771) 0:00:02.173 ******** 2026-03-19 00:48:20.248667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 00:48:20.248675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 00:48:20.248679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 00:48:20.248682 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248698 | orchestrator | 
2026-03-19 00:48:20.248702 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-19 00:48:20.248705 | orchestrator | Thursday 19 March 2026 00:47:59 +0000 (0:00:02.263) 0:00:04.436 ********
2026-03-19 00:48:20.248708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248735 | orchestrator | 
2026-03-19 00:48:20.248739 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-19 00:48:20.248742 | orchestrator | Thursday 19 March 2026 00:48:01 +0000 (0:00:02.423) 0:00:06.860 ********
2026-03-19 00:48:20.248745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248769 | orchestrator | 
2026-03-19 00:48:20.248774 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-19 00:48:20.248778 | orchestrator | Thursday 19 March 2026 00:48:04 +0000 (0:00:02.338) 0:00:09.199 ********
2026-03-19 00:48:20.248781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-19 00:48:20.248810 | orchestrator | 
2026-03-19 00:48:20.248813 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-19 00:48:20.248817 | orchestrator | Thursday 19 March 2026 00:48:05 +0000 (0:00:01.545) 0:00:10.745 ********
2026-03-19 00:48:20.248820 | orchestrator | 
2026-03-19 00:48:20.248823 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-19 00:48:20.248828 | orchestrator | Thursday 19 March 2026 00:48:05 +0000 (0:00:00.317) 0:00:11.062 ********
2026-03-19 00:48:20.248832 | orchestrator | 
2026-03-19 00:48:20.248835 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-19 00:48:20.248838 | orchestrator | Thursday 19 March 2026 00:48:06 +0000 (0:00:00.123) 0:00:11.186 ********
2026-03-19 00:48:20.248841 | orchestrator | 
2026-03-19 00:48:20.248844 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-19 00:48:20.248847 | orchestrator | Thursday 19 March 2026 00:48:06 +0000 (0:00:00.083) 0:00:11.269 ********
2026-03-19 00:48:20.248851 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:48:20.248854 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:48:20.248857 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:48:20.248860 | orchestrator | 
2026-03-19 00:48:20.248863 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-19 00:48:20.248867 | orchestrator | Thursday 19 March 2026 00:48:14 +0000 (0:00:08.240) 0:00:19.510 ********
2026-03-19 00:48:20.248871 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:48:20.248875 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:48:20.248878 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:48:20.248882 | orchestrator | 
2026-03-19 00:48:20.248885 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:48:20.248889 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:48:20.248893 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:48:20.248897 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 00:48:20.248900 | orchestrator | 
2026-03-19 00:48:20.248904 | orchestrator | 
2026-03-19 00:48:20.248907 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:48:20.248911 | orchestrator | Thursday 19 March 2026 00:48:18 +0000 (0:00:03.906) 0:00:23.417 ********
2026-03-19 00:48:20.248916 | orchestrator | ===============================================================================
2026-03-19 00:48:20.248922 | orchestrator | redis : Restart redis container ----------------------------------------- 8.24s
2026-03-19 00:48:20.248926 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.91s
2026-03-19 00:48:20.248929 | orchestrator | redis : Copying over default config.json files -------------------------- 2.42s
2026-03-19 00:48:20.248933 | orchestrator | redis : Copying over redis config files --------------------------------- 2.34s
2026-03-19 00:48:20.248936 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.26s
2026-03-19 00:48:20.248940 | orchestrator | redis : Check redis containers ------------------------------------------ 1.55s
2026-03-19 00:48:20.248944 | orchestrator | redis : include_tasks --------------------------------------------------- 0.77s
2026-03-19 00:48:20.248947 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2026-03-19 00:48:20.248951 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.52s
2026-03-19 00:48:20.248954 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-03-19 00:48:20.248958 | orchestrator | 2026-03-19 00:48:20 | INFO  | Task 0756b3d3-d4b3-41bf-b69d-d1a8511b51a9 is in state SUCCESS
2026-03-19 00:48:20.249182 | orchestrator | 2026-03-19 00:48:20 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:20.249208 | orchestrator | 2026-03-19 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:23.279570 | orchestrator | 2026-03-19 00:48:23 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:23.280329 | orchestrator | 2026-03-19 00:48:23 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:23.281232 | orchestrator | 2026-03-19 00:48:23 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:23.282002 | orchestrator | 2026-03-19 00:48:23 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:23.283007 | orchestrator | 2026-03-19 00:48:23 | INFO  | Task 
0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:23.283177 | orchestrator | 2026-03-19 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:26.328673 | orchestrator | 2026-03-19 00:48:26 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:26.329931 | orchestrator | 2026-03-19 00:48:26 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:26.332134 | orchestrator | 2026-03-19 00:48:26 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:26.332222 | orchestrator | 2026-03-19 00:48:26 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:26.332233 | orchestrator | 2026-03-19 00:48:26 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:26.332241 | orchestrator | 2026-03-19 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:29.392509 | orchestrator | 2026-03-19 00:48:29 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:29.392630 | orchestrator | 2026-03-19 00:48:29 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:29.392661 | orchestrator | 2026-03-19 00:48:29 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:29.392688 | orchestrator | 2026-03-19 00:48:29 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:29.392714 | orchestrator | 2026-03-19 00:48:29 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:29.392752 | orchestrator | 2026-03-19 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:32.441535 | orchestrator | 2026-03-19 00:48:32 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:32.442875 | orchestrator | 2026-03-19 00:48:32 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:32.443176 | orchestrator | 2026-03-19 00:48:32 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:32.444332 | orchestrator | 2026-03-19 00:48:32 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:32.445378 | orchestrator | 2026-03-19 00:48:32 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:32.445406 | orchestrator | 2026-03-19 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:35.509733 | orchestrator | 2026-03-19 00:48:35 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:35.510181 | orchestrator | 2026-03-19 00:48:35 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:35.511038 | orchestrator | 2026-03-19 00:48:35 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:35.512126 | orchestrator | 2026-03-19 00:48:35 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:35.514847 | orchestrator | 2026-03-19 00:48:35 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:35.514909 | orchestrator | 2026-03-19 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:38.590114 | orchestrator | 2026-03-19 00:48:38 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:38.590181 | orchestrator | 2026-03-19 00:48:38 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:38.590190 | orchestrator | 2026-03-19 00:48:38 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:38.590194 | orchestrator | 2026-03-19 00:48:38 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:38.590199 | orchestrator | 2026-03-19 00:48:38 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:38.590205 | orchestrator | 2026-03-19 00:48:38 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:41.603231 | orchestrator | 2026-03-19 00:48:41 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:41.603332 | orchestrator | 2026-03-19 00:48:41 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:41.603631 | orchestrator | 2026-03-19 00:48:41 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:41.604252 | orchestrator | 2026-03-19 00:48:41 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:41.607392 | orchestrator | 2026-03-19 00:48:41 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:41.607437 | orchestrator | 2026-03-19 00:48:41 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:44.638234 | orchestrator | 2026-03-19 00:48:44 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:44.638723 | orchestrator | 2026-03-19 00:48:44 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:44.644220 | orchestrator | 2026-03-19 00:48:44 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:44.646729 | orchestrator | 2026-03-19 00:48:44 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:44.649144 | orchestrator | 2026-03-19 00:48:44 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:44.649473 | orchestrator | 2026-03-19 00:48:44 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:47.686155 | orchestrator | 2026-03-19 00:48:47 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:47.686219 | orchestrator | 2026-03-19 00:48:47 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:47.689416 | orchestrator | 2026-03-19 00:48:47 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:47.689857 | orchestrator | 2026-03-19 00:48:47 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:47.690604 | orchestrator | 2026-03-19 00:48:47 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:47.690676 | orchestrator | 2026-03-19 00:48:47 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:50.763820 | orchestrator | 2026-03-19 00:48:50 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:50.763882 | orchestrator | 2026-03-19 00:48:50 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:50.763891 | orchestrator | 2026-03-19 00:48:50 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:50.763897 | orchestrator | 2026-03-19 00:48:50 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:50.763904 | orchestrator | 2026-03-19 00:48:50 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:50.763911 | orchestrator | 2026-03-19 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:53.784951 | orchestrator | 2026-03-19 00:48:53 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state STARTED
2026-03-19 00:48:53.785228 | orchestrator | 2026-03-19 00:48:53 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED
2026-03-19 00:48:53.787524 | orchestrator | 2026-03-19 00:48:53 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED
2026-03-19 00:48:53.788625 | orchestrator | 2026-03-19 00:48:53 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:48:53.790701 | orchestrator | 2026-03-19 00:48:53 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED
2026-03-19 00:48:53.790772 | orchestrator | 2026-03-19 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:48:56.824405 | orchestrator | 
2026-03-19 00:48:56.824516 | orchestrator | 
2026-03-19 00:48:56.824528 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 00:48:56.824535 | orchestrator | 
2026-03-19 00:48:56.824542 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 00:48:56.824549 | orchestrator | Thursday 19 March 2026 00:47:54 +0000 (0:00:00.364) 0:00:00.364 ********
2026-03-19 00:48:56.824606 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:48:56.824614 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:48:56.824621 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:48:56.824628 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:48:56.824635 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:48:56.824712 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:48:56.824717 | orchestrator | 
2026-03-19 00:48:56.824722 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 00:48:56.824726 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.662) 0:00:01.026 ********
2026-03-19 00:48:56.824730 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-19 00:48:56.824748 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-19 00:48:56.824752 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-19 00:48:56.824756 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-19 00:48:56.824760 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-19 00:48:56.824764 | orchestrator | ok: 
[testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-19 00:48:56.824768 | orchestrator | 
2026-03-19 00:48:56.824772 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-19 00:48:56.824775 | orchestrator | 
2026-03-19 00:48:56.824779 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-19 00:48:56.824783 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.958) 0:00:01.984 ********
2026-03-19 00:48:56.824797 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:48:56.824802 | orchestrator | 
2026-03-19 00:48:56.824806 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-19 00:48:56.824810 | orchestrator | Thursday 19 March 2026 00:47:58 +0000 (0:00:01.663) 0:00:03.648 ********
2026-03-19 00:48:56.824814 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-19 00:48:56.824818 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-19 00:48:56.824822 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-19 00:48:56.824826 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-19 00:48:56.824829 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-19 00:48:56.824833 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-19 00:48:56.824837 | orchestrator | 
2026-03-19 00:48:56.824841 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-19 00:48:56.824845 | orchestrator | Thursday 19 March 2026 00:48:00 +0000 (0:00:01.973) 0:00:05.621 ********
2026-03-19 00:48:56.824849 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-19 00:48:56.824852 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-19 00:48:56.824856 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-19 00:48:56.824860 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-19 00:48:56.824864 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-19 00:48:56.824867 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-19 00:48:56.824871 | orchestrator | 
2026-03-19 00:48:56.824875 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-19 00:48:56.824879 | orchestrator | Thursday 19 March 2026 00:48:01 +0000 (0:00:01.772) 0:00:07.394 ********
2026-03-19 00:48:56.824883 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch) 
2026-03-19 00:48:56.824886 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:48:56.824890 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch) 
2026-03-19 00:48:56.824894 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:48:56.824898 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch) 
2026-03-19 00:48:56.824902 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:48:56.824905 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch) 
2026-03-19 00:48:56.824909 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:48:56.824913 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch) 
2026-03-19 00:48:56.824916 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:48:56.824920 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch) 
2026-03-19 00:48:56.824924 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:48:56.824928 | orchestrator | 
2026-03-19 00:48:56.824932 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-19 00:48:56.824939 | orchestrator | Thursday 19 March 2026 00:48:03 +0000 (0:00:01.124) 0:00:08.518 ********
2026-03-19 00:48:56.824945 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:48:56.824949 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:48:56.824953 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:48:56.824956 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:48:56.824960 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:48:56.824964 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:48:56.824968 | orchestrator | 
2026-03-19 00:48:56.824971 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-19 00:48:56.824975 | orchestrator | Thursday 19 March 2026 00:48:03 +0000 (0:00:00.604) 0:00:09.123 ********
2026-03-19 00:48:56.824989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-19 00:48:56.824995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.824999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825031 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825072 | orchestrator | 2026-03-19 00:48:56.825079 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-19 00:48:56.825085 | orchestrator | Thursday 19 March 2026 00:48:05 +0000 (0:00:01.593) 0:00:10.716 ******** 2026-03-19 00:48:56.825092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-19 00:48:56.825204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825211 | orchestrator | 2026-03-19 00:48:56.825218 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-19 00:48:56.825225 | orchestrator | Thursday 19 March 2026 00:48:07 +0000 (0:00:02.616) 0:00:13.333 ******** 2026-03-19 00:48:56.825229 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:48:56.825233 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:48:56.825237 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:48:56.825241 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:48:56.825246 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:48:56.825252 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:48:56.825261 | orchestrator | 2026-03-19 00:48:56.825335 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-19 00:48:56.825343 | orchestrator | Thursday 19 March 2026 00:48:08 +0000 (0:00:00.684) 0:00:14.018 ******** 2026-03-19 00:48:56.825349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825375 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 00:48:56.825431 | orchestrator | 2026-03-19 00:48:56.825435 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 00:48:56.825440 | orchestrator | Thursday 19 March 2026 00:48:10 +0000 (0:00:02.227) 0:00:16.245 ******** 2026-03-19 00:48:56.825444 | orchestrator | 2026-03-19 00:48:56.825449 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 00:48:56.825455 | orchestrator | Thursday 19 March 2026 00:48:11 +0000 (0:00:00.289) 0:00:16.534 ******** 2026-03-19 00:48:56.825459 | orchestrator | 2026-03-19 00:48:56.825464 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 00:48:56.825468 | orchestrator | Thursday 19 March 2026 00:48:11 +0000 (0:00:00.160) 0:00:16.695 ******** 2026-03-19 00:48:56.825472 | orchestrator | 2026-03-19 00:48:56.825477 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 00:48:56.825481 | orchestrator | Thursday 19 March 2026 00:48:11 +0000 (0:00:00.213) 0:00:16.908 ******** 2026-03-19 00:48:56.825485 | orchestrator | 2026-03-19 00:48:56.825490 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 00:48:56.825494 | orchestrator | Thursday 19 March 2026 
00:48:11 +0000 (0:00:00.399) 0:00:17.308 ******** 2026-03-19 00:48:56.825498 | orchestrator | 2026-03-19 00:48:56.825502 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 00:48:56.825507 | orchestrator | Thursday 19 March 2026 00:48:12 +0000 (0:00:00.131) 0:00:17.439 ******** 2026-03-19 00:48:56.825511 | orchestrator | 2026-03-19 00:48:56.825515 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-19 00:48:56.825519 | orchestrator | Thursday 19 March 2026 00:48:12 +0000 (0:00:00.149) 0:00:17.589 ******** 2026-03-19 00:48:56.825524 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:48:56.825528 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:48:56.825533 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:48:56.825537 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:48:56.825541 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:48:56.825545 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:48:56.825550 | orchestrator | 2026-03-19 00:48:56.825554 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-19 00:48:56.825558 | orchestrator | Thursday 19 March 2026 00:48:21 +0000 (0:00:09.315) 0:00:26.905 ******** 2026-03-19 00:48:56.825562 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:48:56.825567 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:48:56.825571 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:48:56.825575 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:48:56.825579 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:48:56.825584 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:48:56.825588 | orchestrator | 2026-03-19 00:48:56.825592 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-19 00:48:56.825597 | orchestrator | Thursday 19 March 2026 00:48:22 +0000 (0:00:01.301) 0:00:28.206 
******** 2026-03-19 00:48:56.825601 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:48:56.825605 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:48:56.825610 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:48:56.825614 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:48:56.825618 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:48:56.825623 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:48:56.825627 | orchestrator | 2026-03-19 00:48:56.825631 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-19 00:48:56.825636 | orchestrator | Thursday 19 March 2026 00:48:33 +0000 (0:00:10.613) 0:00:38.820 ******** 2026-03-19 00:48:56.825642 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-19 00:48:56.825646 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-19 00:48:56.825651 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-19 00:48:56.825655 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-19 00:48:56.825660 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-19 00:48:56.825667 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:48:56 | INFO  | Task ec92ecf9-588f-4042-8635-7fbaa79a865a is in state SUCCESS 2026-03-19 00:48:56.825674 | orchestrator | => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-19 00:48:56.825678 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-19 00:48:56.825683 | orchestrator | changed: [testbed-node-4] => 
(item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-19 00:48:56.825687 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-19 00:48:56.825692 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-19 00:48:56.825696 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-19 00:48:56.825700 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-19 00:48:56.825705 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 00:48:56.825709 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 00:48:56.825716 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 00:48:56.825722 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 00:48:56.825728 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 00:48:56.825734 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 00:48:56.825740 | orchestrator | 2026-03-19 00:48:56.825753 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-19 00:48:56.825892 | orchestrator | Thursday 19 March 2026 00:48:40 +0000 (0:00:07.270) 0:00:46.092 ******** 2026-03-19 00:48:56.825898 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-19 
00:48:56.825902 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:48:56.825906 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-19 00:48:56.825909 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:48:56.825913 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-19 00:48:56.825917 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:48:56.825921 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-19 00:48:56.825924 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-19 00:48:56.825928 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-19 00:48:56.825932 | orchestrator | 2026-03-19 00:48:56.825936 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-19 00:48:56.825939 | orchestrator | Thursday 19 March 2026 00:48:43 +0000 (0:00:02.713) 0:00:48.805 ******** 2026-03-19 00:48:56.825944 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-19 00:48:56.825950 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:48:56.825975 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-19 00:48:56.825983 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:48:56.825989 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-19 00:48:56.825996 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:48:56.826002 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-19 00:48:56.826008 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-19 00:48:56.826094 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-19 00:48:56.826101 | orchestrator | 2026-03-19 00:48:56.826105 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-19 00:48:56.826115 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 
(0:00:03.632) 0:00:52.438 ******** 2026-03-19 00:48:56.826118 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:48:56.826122 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:48:56.826126 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:48:56.826130 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:48:56.826135 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:48:56.826142 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:48:56.826148 | orchestrator | 2026-03-19 00:48:56.826155 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:48:56.826165 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 00:48:56.826172 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 00:48:56.826178 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 00:48:56.826185 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 00:48:56.826198 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 00:48:56.826206 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 00:48:56.826213 | orchestrator | 2026-03-19 00:48:56.826219 | orchestrator | 2026-03-19 00:48:56.826226 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:48:56.826230 | orchestrator | Thursday 19 March 2026 00:48:55 +0000 (0:00:08.358) 0:01:00.796 ******** 2026-03-19 00:48:56.826234 | orchestrator | =============================================================================== 2026-03-19 00:48:56.826241 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 
18.97s 2026-03-19 00:48:56.826249 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.32s 2026-03-19 00:48:56.826257 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.27s 2026-03-19 00:48:56.826263 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.63s 2026-03-19 00:48:56.826282 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.71s 2026-03-19 00:48:56.826287 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.62s 2026-03-19 00:48:56.826293 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.23s 2026-03-19 00:48:56.826300 | orchestrator | module-load : Load modules ---------------------------------------------- 1.97s 2026-03-19 00:48:56.826305 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.77s 2026-03-19 00:48:56.826309 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.66s 2026-03-19 00:48:56.826312 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.59s 2026-03-19 00:48:56.826316 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.34s 2026-03-19 00:48:56.826320 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.30s 2026-03-19 00:48:56.826323 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.12s 2026-03-19 00:48:56.826327 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2026-03-19 00:48:56.826331 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.68s 2026-03-19 00:48:56.826335 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 
2026-03-19 00:48:56.826339 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.60s 2026-03-19 00:48:56.826347 | orchestrator | 2026-03-19 00:48:56 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:48:56.826350 | orchestrator | 2026-03-19 00:48:56 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:48:56.826354 | orchestrator | 2026-03-19 00:48:56 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:48:56.826358 | orchestrator | 2026-03-19 00:48:56 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:48:56.826362 | orchestrator | 2026-03-19 00:48:56 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:48:56.826366 | orchestrator | 2026-03-19 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:48:59.857016 | orchestrator | 2026-03-19 00:48:59 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:48:59.859286 | orchestrator | 2026-03-19 00:48:59 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:48:59.861385 | orchestrator | 2026-03-19 00:48:59 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:48:59.863498 | orchestrator | 2026-03-19 00:48:59 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:48:59.865303 | orchestrator | 2026-03-19 00:48:59 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:48:59.865525 | orchestrator | 2026-03-19 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:02.901058 | orchestrator | 2026-03-19 00:49:02 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:02.902045 | orchestrator | 2026-03-19 00:49:02 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 
00:49:02.905503 | orchestrator | 2026-03-19 00:49:02 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:02.906947 | orchestrator | 2026-03-19 00:49:02 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:02.907684 | orchestrator | 2026-03-19 00:49:02 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:02.907720 | orchestrator | 2026-03-19 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:05.949962 | orchestrator | 2026-03-19 00:49:05 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:05.950376 | orchestrator | 2026-03-19 00:49:05 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:05.950985 | orchestrator | 2026-03-19 00:49:05 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:05.954579 | orchestrator | 2026-03-19 00:49:05 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:05.955061 | orchestrator | 2026-03-19 00:49:05 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:05.955090 | orchestrator | 2026-03-19 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:08.987326 | orchestrator | 2026-03-19 00:49:08 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:08.989324 | orchestrator | 2026-03-19 00:49:08 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:08.993190 | orchestrator | 2026-03-19 00:49:08 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:08.996798 | orchestrator | 2026-03-19 00:49:08 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:08.999844 | orchestrator | 2026-03-19 00:49:08 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 
00:49:08.999903 | orchestrator | 2026-03-19 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:12.081279 | orchestrator | 2026-03-19 00:49:12 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:12.081710 | orchestrator | 2026-03-19 00:49:12 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:12.084012 | orchestrator | 2026-03-19 00:49:12 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:12.087203 | orchestrator | 2026-03-19 00:49:12 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:12.087680 | orchestrator | 2026-03-19 00:49:12 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:12.087705 | orchestrator | 2026-03-19 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:15.120219 | orchestrator | 2026-03-19 00:49:15 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:15.120869 | orchestrator | 2026-03-19 00:49:15 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:15.121750 | orchestrator | 2026-03-19 00:49:15 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:15.122577 | orchestrator | 2026-03-19 00:49:15 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:15.123547 | orchestrator | 2026-03-19 00:49:15 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:15.123581 | orchestrator | 2026-03-19 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:18.162230 | orchestrator | 2026-03-19 00:49:18 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:18.167961 | orchestrator | 2026-03-19 00:49:18 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:18.171123 | orchestrator 
| 2026-03-19 00:49:18 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:18.172003 | orchestrator | 2026-03-19 00:49:18 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:18.173474 | orchestrator | 2026-03-19 00:49:18 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:18.173559 | orchestrator | 2026-03-19 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:21.218633 | orchestrator | 2026-03-19 00:49:21 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:21.219073 | orchestrator | 2026-03-19 00:49:21 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:21.219542 | orchestrator | 2026-03-19 00:49:21 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:21.219860 | orchestrator | 2026-03-19 00:49:21 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:21.221115 | orchestrator | 2026-03-19 00:49:21 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:21.221143 | orchestrator | 2026-03-19 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:24.255216 | orchestrator | 2026-03-19 00:49:24 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:24.255468 | orchestrator | 2026-03-19 00:49:24 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:24.256016 | orchestrator | 2026-03-19 00:49:24 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:24.256839 | orchestrator | 2026-03-19 00:49:24 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:24.257341 | orchestrator | 2026-03-19 00:49:24 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:24.257399 | orchestrator | 
2026-03-19 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:27.293209 | orchestrator | 2026-03-19 00:49:27 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:27.293460 | orchestrator | 2026-03-19 00:49:27 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:27.294568 | orchestrator | 2026-03-19 00:49:27 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:27.295562 | orchestrator | 2026-03-19 00:49:27 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:27.296551 | orchestrator | 2026-03-19 00:49:27 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:27.296586 | orchestrator | 2026-03-19 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:30.328066 | orchestrator | 2026-03-19 00:49:30 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:30.328736 | orchestrator | 2026-03-19 00:49:30 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:30.329468 | orchestrator | 2026-03-19 00:49:30 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:30.329751 | orchestrator | 2026-03-19 00:49:30 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:30.332396 | orchestrator | 2026-03-19 00:49:30 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:30.332448 | orchestrator | 2026-03-19 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:33.368194 | orchestrator | 2026-03-19 00:49:33 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:33.370895 | orchestrator | 2026-03-19 00:49:33 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:33.371428 | orchestrator | 2026-03-19 00:49:33 | INFO  | 
Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:33.372174 | orchestrator | 2026-03-19 00:49:33 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:33.373888 | orchestrator | 2026-03-19 00:49:33 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:33.373925 | orchestrator | 2026-03-19 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:36.412556 | orchestrator | 2026-03-19 00:49:36 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:36.413118 | orchestrator | 2026-03-19 00:49:36 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:36.413676 | orchestrator | 2026-03-19 00:49:36 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:36.416205 | orchestrator | 2026-03-19 00:49:36 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:36.417078 | orchestrator | 2026-03-19 00:49:36 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:36.417107 | orchestrator | 2026-03-19 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:39.514973 | orchestrator | 2026-03-19 00:49:39 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:39.515039 | orchestrator | 2026-03-19 00:49:39 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:39.515044 | orchestrator | 2026-03-19 00:49:39 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:39.515047 | orchestrator | 2026-03-19 00:49:39 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:39.515050 | orchestrator | 2026-03-19 00:49:39 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:39.515054 | orchestrator | 2026-03-19 00:49:39 | INFO  | Wait 1 
second(s) until the next check 2026-03-19 00:49:42.576316 | orchestrator | 2026-03-19 00:49:42 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:42.576369 | orchestrator | 2026-03-19 00:49:42 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:42.577638 | orchestrator | 2026-03-19 00:49:42 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:42.579803 | orchestrator | 2026-03-19 00:49:42 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:42.581498 | orchestrator | 2026-03-19 00:49:42 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:42.581711 | orchestrator | 2026-03-19 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:45.668013 | orchestrator | 2026-03-19 00:49:45 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:45.672974 | orchestrator | 2026-03-19 00:49:45 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:45.673458 | orchestrator | 2026-03-19 00:49:45 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:45.676489 | orchestrator | 2026-03-19 00:49:45 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:45.676940 | orchestrator | 2026-03-19 00:49:45 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:45.676954 | orchestrator | 2026-03-19 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:48.756045 | orchestrator | 2026-03-19 00:49:48 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:48.756110 | orchestrator | 2026-03-19 00:49:48 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:48.756423 | orchestrator | 2026-03-19 00:49:48 | INFO  | Task 
b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:48.757113 | orchestrator | 2026-03-19 00:49:48 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:48.758044 | orchestrator | 2026-03-19 00:49:48 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:48.758103 | orchestrator | 2026-03-19 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:51.789631 | orchestrator | 2026-03-19 00:49:51 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:51.792416 | orchestrator | 2026-03-19 00:49:51 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:51.792844 | orchestrator | 2026-03-19 00:49:51 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:51.795747 | orchestrator | 2026-03-19 00:49:51 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:51.796150 | orchestrator | 2026-03-19 00:49:51 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:51.796248 | orchestrator | 2026-03-19 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:49:55.122078 | orchestrator | 2026-03-19 00:49:55 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:55.123187 | orchestrator | 2026-03-19 00:49:55 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:55.124173 | orchestrator | 2026-03-19 00:49:55 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:55.124775 | orchestrator | 2026-03-19 00:49:55 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:55.126300 | orchestrator | 2026-03-19 00:49:55 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:55.127815 | orchestrator | 2026-03-19 00:49:55 | INFO  | Wait 1 
second(s) until the next check 2026-03-19 00:49:58.403554 | orchestrator | 2026-03-19 00:49:58 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:49:58.403852 | orchestrator | 2026-03-19 00:49:58 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:49:58.404778 | orchestrator | 2026-03-19 00:49:58 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:49:58.406084 | orchestrator | 2026-03-19 00:49:58 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:49:58.406986 | orchestrator | 2026-03-19 00:49:58 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:49:58.407016 | orchestrator | 2026-03-19 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:01.472649 | orchestrator | 2026-03-19 00:50:01 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:01.473055 | orchestrator | 2026-03-19 00:50:01 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:50:01.474054 | orchestrator | 2026-03-19 00:50:01 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:01.474967 | orchestrator | 2026-03-19 00:50:01 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:01.477150 | orchestrator | 2026-03-19 00:50:01 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:01.477188 | orchestrator | 2026-03-19 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:04.529227 | orchestrator | 2026-03-19 00:50:04 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:04.529317 | orchestrator | 2026-03-19 00:50:04 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state STARTED 2026-03-19 00:50:04.529326 | orchestrator | 2026-03-19 00:50:04 | INFO  | Task 
b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:04.529519 | orchestrator | 2026-03-19 00:50:04 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:04.530767 | orchestrator | 2026-03-19 00:50:04 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:04.530814 | orchestrator | 2026-03-19 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:07.578559 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:07.583108 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task c799f326-1198-4300-b5b6-a72bf1c4f1b1 is in state SUCCESS 2026-03-19 00:50:07.583916 | orchestrator | 2026-03-19 00:50:07.583956 | orchestrator | 2026-03-19 00:50:07.583962 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-19 00:50:07.583991 | orchestrator | 2026-03-19 00:50:07.583997 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-19 00:50:07.584002 | orchestrator | Thursday 19 March 2026 00:45:40 +0000 (0:00:00.269) 0:00:00.269 ******** 2026-03-19 00:50:07.584007 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:50:07.584013 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:50:07.584017 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:50:07.584021 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.584026 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.584030 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.584035 | orchestrator | 2026-03-19 00:50:07.584066 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-19 00:50:07.584074 | orchestrator | Thursday 19 March 2026 00:45:41 +0000 (0:00:00.680) 0:00:00.950 ******** 2026-03-19 00:50:07.584094 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584109 | 
orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584115 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584121 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584127 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584161 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584168 | orchestrator | 2026-03-19 00:50:07.584174 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-19 00:50:07.584181 | orchestrator | Thursday 19 March 2026 00:45:42 +0000 (0:00:00.763) 0:00:01.713 ******** 2026-03-19 00:50:07.584188 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584194 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584253 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584262 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584268 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584274 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584280 | orchestrator | 2026-03-19 00:50:07.584286 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-19 00:50:07.584293 | orchestrator | Thursday 19 March 2026 00:45:42 +0000 (0:00:00.590) 0:00:02.304 ******** 2026-03-19 00:50:07.584299 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.584305 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.584311 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.584318 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.584349 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.584357 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.584363 | orchestrator | 2026-03-19 00:50:07.584369 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-19 00:50:07.584392 | orchestrator | Thursday 19 March 2026 00:45:45 +0000 
(0:00:02.519) 0:00:04.824 ******** 2026-03-19 00:50:07.584399 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.584405 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.584412 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.584418 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.584425 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.584432 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.584438 | orchestrator | 2026-03-19 00:50:07.584445 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-19 00:50:07.584451 | orchestrator | Thursday 19 March 2026 00:45:46 +0000 (0:00:00.989) 0:00:05.813 ******** 2026-03-19 00:50:07.584458 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.584477 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.584484 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.584490 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.584497 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.584504 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.584510 | orchestrator | 2026-03-19 00:50:07.584516 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-19 00:50:07.584533 | orchestrator | Thursday 19 March 2026 00:45:47 +0000 (0:00:01.393) 0:00:07.207 ******** 2026-03-19 00:50:07.584540 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584547 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584553 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584566 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584573 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584579 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584586 | orchestrator | 2026-03-19 00:50:07.584592 | orchestrator | TASK [k3s_prereq : Load br_netfilter] 
****************************************** 2026-03-19 00:50:07.584596 | orchestrator | Thursday 19 March 2026 00:45:49 +0000 (0:00:01.165) 0:00:08.372 ******** 2026-03-19 00:50:07.584600 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584603 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584607 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584611 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584615 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584618 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584622 | orchestrator | 2026-03-19 00:50:07.584626 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-19 00:50:07.584630 | orchestrator | Thursday 19 March 2026 00:45:49 +0000 (0:00:00.773) 0:00:09.146 ******** 2026-03-19 00:50:07.584634 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 00:50:07.584638 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 00:50:07.584641 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584645 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 00:50:07.584649 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 00:50:07.584653 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584656 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 00:50:07.584660 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 00:50:07.584664 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584668 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 00:50:07.584685 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 00:50:07.584689 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584693 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 00:50:07.584697 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 00:50:07.584701 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584704 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 00:50:07.584708 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 00:50:07.584712 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584716 | orchestrator | 2026-03-19 00:50:07.584719 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-19 00:50:07.584723 | orchestrator | Thursday 19 March 2026 00:45:50 +0000 (0:00:00.995) 0:00:10.141 ******** 2026-03-19 00:50:07.584727 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584730 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584734 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584738 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584741 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584745 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584749 | orchestrator | 2026-03-19 00:50:07.584753 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-19 00:50:07.584761 | orchestrator | Thursday 19 March 2026 00:45:52 +0000 (0:00:01.566) 0:00:11.708 ******** 2026-03-19 00:50:07.584771 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:50:07.584779 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:50:07.584784 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:50:07.584791 | orchestrator | ok: 
[testbed-node-0] 2026-03-19 00:50:07.584797 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.584803 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.584810 | orchestrator | 2026-03-19 00:50:07.584816 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-19 00:50:07.584823 | orchestrator | Thursday 19 March 2026 00:45:53 +0000 (0:00:00.870) 0:00:12.579 ******** 2026-03-19 00:50:07.584829 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.584835 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.584841 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.584848 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.584854 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.584858 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.584861 | orchestrator | 2026-03-19 00:50:07.584871 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-19 00:50:07.584877 | orchestrator | Thursday 19 March 2026 00:45:59 +0000 (0:00:06.530) 0:00:19.109 ******** 2026-03-19 00:50:07.584883 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584889 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.584895 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584901 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584907 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584913 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584919 | orchestrator | 2026-03-19 00:50:07.584924 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-19 00:50:07.584930 | orchestrator | Thursday 19 March 2026 00:46:01 +0000 (0:00:01.825) 0:00:20.935 ******** 2026-03-19 00:50:07.584937 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.584943 | orchestrator | skipping: [testbed-node-4] 
2026-03-19 00:50:07.584949 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.584955 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.584961 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.584967 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.584974 | orchestrator | 2026-03-19 00:50:07.584980 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-19 00:50:07.584988 | orchestrator | Thursday 19 March 2026 00:46:04 +0000 (0:00:03.171) 0:00:24.106 ******** 2026-03-19 00:50:07.584994 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.585000 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.585006 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.585012 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585019 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585025 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585031 | orchestrator | 2026-03-19 00:50:07.585038 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-19 00:50:07.585044 | orchestrator | Thursday 19 March 2026 00:46:05 +0000 (0:00:00.898) 0:00:25.005 ******** 2026-03-19 00:50:07.585051 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-19 00:50:07.585056 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-19 00:50:07.585060 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.585064 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-19 00:50:07.585068 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-19 00:50:07.585072 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.585075 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-19 00:50:07.585079 | orchestrator | skipping: 
[testbed-node-5] => (item=rancher/k3s)  2026-03-19 00:50:07.585083 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.585087 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-19 00:50:07.585095 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-19 00:50:07.585099 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585102 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-19 00:50:07.585106 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-19 00:50:07.585110 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585114 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-19 00:50:07.585118 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-19 00:50:07.585121 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585125 | orchestrator | 2026-03-19 00:50:07.585129 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-19 00:50:07.585138 | orchestrator | Thursday 19 March 2026 00:46:06 +0000 (0:00:01.017) 0:00:26.022 ******** 2026-03-19 00:50:07.585142 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.585146 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.585150 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.585153 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585157 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585161 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585165 | orchestrator | 2026-03-19 00:50:07.585169 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-19 00:50:07.585172 | orchestrator | Thursday 19 March 2026 00:46:07 +0000 (0:00:00.781) 0:00:26.804 ******** 2026-03-19 00:50:07.585176 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
00:50:07.585180 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.585184 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.585188 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585191 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585195 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585199 | orchestrator | 2026-03-19 00:50:07.585224 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-19 00:50:07.585229 | orchestrator | 2026-03-19 00:50:07.585235 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-19 00:50:07.585241 | orchestrator | Thursday 19 March 2026 00:46:08 +0000 (0:00:01.161) 0:00:27.965 ******** 2026-03-19 00:50:07.585246 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585254 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585262 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585268 | orchestrator | 2026-03-19 00:50:07.585274 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-19 00:50:07.585280 | orchestrator | Thursday 19 March 2026 00:46:09 +0000 (0:00:00.982) 0:00:28.947 ******** 2026-03-19 00:50:07.585286 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585293 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585299 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585304 | orchestrator | 2026-03-19 00:50:07.585311 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-19 00:50:07.585317 | orchestrator | Thursday 19 March 2026 00:46:11 +0000 (0:00:01.547) 0:00:30.495 ******** 2026-03-19 00:50:07.585323 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585330 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585336 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585342 | 
orchestrator | 2026-03-19 00:50:07.585348 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-19 00:50:07.585361 | orchestrator | Thursday 19 March 2026 00:46:12 +0000 (0:00:00.967) 0:00:31.462 ******** 2026-03-19 00:50:07.585366 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585373 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585379 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585385 | orchestrator | 2026-03-19 00:50:07.585391 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-19 00:50:07.585397 | orchestrator | Thursday 19 March 2026 00:46:13 +0000 (0:00:01.078) 0:00:32.541 ******** 2026-03-19 00:50:07.585410 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585416 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585422 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585429 | orchestrator | 2026-03-19 00:50:07.585435 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-19 00:50:07.585441 | orchestrator | Thursday 19 March 2026 00:46:13 +0000 (0:00:00.489) 0:00:33.031 ******** 2026-03-19 00:50:07.585447 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.585453 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.585458 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.585465 | orchestrator | 2026-03-19 00:50:07.585471 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-19 00:50:07.585477 | orchestrator | Thursday 19 March 2026 00:46:14 +0000 (0:00:01.204) 0:00:34.235 ******** 2026-03-19 00:50:07.585483 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.585489 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.585494 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.585500 | orchestrator | 2026-03-19 
00:50:07.585507 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-19 00:50:07.585513 | orchestrator | Thursday 19 March 2026 00:46:16 +0000 (0:00:01.751) 0:00:35.986 ******** 2026-03-19 00:50:07.585519 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:50:07.585525 | orchestrator | 2026-03-19 00:50:07.585531 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-19 00:50:07.585537 | orchestrator | Thursday 19 March 2026 00:46:17 +0000 (0:00:00.682) 0:00:36.669 ******** 2026-03-19 00:50:07.585544 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585550 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585556 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585563 | orchestrator | 2026-03-19 00:50:07.585569 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-19 00:50:07.585576 | orchestrator | Thursday 19 March 2026 00:46:19 +0000 (0:00:02.495) 0:00:39.165 ******** 2026-03-19 00:50:07.585582 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585588 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.585594 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585601 | orchestrator | 2026-03-19 00:50:07.585608 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-19 00:50:07.585614 | orchestrator | Thursday 19 March 2026 00:46:20 +0000 (0:00:01.027) 0:00:40.192 ******** 2026-03-19 00:50:07.585620 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585626 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585633 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.585639 | orchestrator | 2026-03-19 00:50:07.585645 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] 
************************** 2026-03-19 00:50:07.585651 | orchestrator | Thursday 19 March 2026 00:46:22 +0000 (0:00:01.312) 0:00:41.505 ******** 2026-03-19 00:50:07.585656 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585662 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585668 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.585674 | orchestrator | 2026-03-19 00:50:07.585680 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-19 00:50:07.585696 | orchestrator | Thursday 19 March 2026 00:46:23 +0000 (0:00:01.580) 0:00:43.086 ******** 2026-03-19 00:50:07.585703 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585710 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585718 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585724 | orchestrator | 2026-03-19 00:50:07.585731 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-19 00:50:07.585737 | orchestrator | Thursday 19 March 2026 00:46:24 +0000 (0:00:00.591) 0:00:43.677 ******** 2026-03-19 00:50:07.585744 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.585752 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.585764 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.585770 | orchestrator | 2026-03-19 00:50:07.585776 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-19 00:50:07.585783 | orchestrator | Thursday 19 March 2026 00:46:24 +0000 (0:00:00.391) 0:00:44.068 ******** 2026-03-19 00:50:07.585790 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.585796 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.585802 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.585809 | orchestrator | 2026-03-19 00:50:07.585815 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label 
compatibility] ********** 2026-03-19 00:50:07.585821 | orchestrator | Thursday 19 March 2026 00:46:27 +0000 (0:00:02.380) 0:00:46.449 ******** 2026-03-19 00:50:07.585826 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585832 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585837 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585843 | orchestrator | 2026-03-19 00:50:07.585849 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-19 00:50:07.585855 | orchestrator | Thursday 19 March 2026 00:46:29 +0000 (0:00:02.312) 0:00:48.762 ******** 2026-03-19 00:50:07.585861 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585866 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585872 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585878 | orchestrator | 2026-03-19 00:50:07.585884 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-19 00:50:07.585890 | orchestrator | Thursday 19 March 2026 00:46:29 +0000 (0:00:00.387) 0:00:49.149 ******** 2026-03-19 00:50:07.585896 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-19 00:50:07.585909 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-19 00:50:07.585914 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-19 00:50:07.585920 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-19 00:50:07.585926 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-19 00:50:07.585932 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-19 00:50:07.585939 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-19 00:50:07.585945 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-19 00:50:07.585950 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-19 00:50:07.585956 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-19 00:50:07.585963 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-19 00:50:07.585969 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-19 00:50:07.585975 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.585981 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.585988 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.585994 | orchestrator | 2026-03-19 00:50:07.585999 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-19 00:50:07.586005 | orchestrator | Thursday 19 March 2026 00:47:13 +0000 (0:00:43.480) 0:01:32.629 ******** 2026-03-19 00:50:07.586079 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.586088 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.586095 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.586121 | orchestrator | 2026-03-19 00:50:07.586129 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-19 00:50:07.586137 | orchestrator | Thursday 19 March 2026 00:47:14 +0000 (0:00:00.741) 0:01:33.371 ******** 2026-03-19 00:50:07.586144 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586152 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586158 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586165 | orchestrator | 2026-03-19 00:50:07.586171 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-19 00:50:07.586178 | orchestrator | Thursday 19 March 2026 00:47:15 +0000 (0:00:01.178) 0:01:34.549 ******** 2026-03-19 00:50:07.586185 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586192 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586198 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586222 | orchestrator | 2026-03-19 00:50:07.586236 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-19 00:50:07.586243 | orchestrator | Thursday 19 March 2026 00:47:16 +0000 (0:00:01.741) 0:01:36.290 ******** 2026-03-19 00:50:07.586249 
| orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586255 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586262 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586268 | orchestrator | 2026-03-19 00:50:07.586274 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-19 00:50:07.586280 | orchestrator | Thursday 19 March 2026 00:47:41 +0000 (0:00:24.588) 0:02:00.878 ******** 2026-03-19 00:50:07.586287 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.586292 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.586299 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.586305 | orchestrator | 2026-03-19 00:50:07.586312 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-19 00:50:07.586318 | orchestrator | Thursday 19 March 2026 00:47:42 +0000 (0:00:00.642) 0:02:01.521 ******** 2026-03-19 00:50:07.586324 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.586329 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.586336 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.586342 | orchestrator | 2026-03-19 00:50:07.586348 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-19 00:50:07.586354 | orchestrator | Thursday 19 March 2026 00:47:43 +0000 (0:00:01.037) 0:02:02.559 ******** 2026-03-19 00:50:07.586360 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586366 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586372 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586378 | orchestrator | 2026-03-19 00:50:07.586385 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-19 00:50:07.586391 | orchestrator | Thursday 19 March 2026 00:47:43 +0000 (0:00:00.724) 0:02:03.283 ******** 2026-03-19 00:50:07.586397 | orchestrator | ok: [testbed-node-0] 
2026-03-19 00:50:07.586403 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.586410 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.586416 | orchestrator | 2026-03-19 00:50:07.586423 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-19 00:50:07.586429 | orchestrator | Thursday 19 March 2026 00:47:44 +0000 (0:00:00.664) 0:02:03.948 ******** 2026-03-19 00:50:07.586435 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.586441 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.586448 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.586455 | orchestrator | 2026-03-19 00:50:07.586461 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-19 00:50:07.586475 | orchestrator | Thursday 19 March 2026 00:47:45 +0000 (0:00:00.438) 0:02:04.386 ******** 2026-03-19 00:50:07.586482 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586495 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586501 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586507 | orchestrator | 2026-03-19 00:50:07.586513 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-19 00:50:07.586518 | orchestrator | Thursday 19 March 2026 00:47:45 +0000 (0:00:00.858) 0:02:05.245 ******** 2026-03-19 00:50:07.586524 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586530 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586536 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586542 | orchestrator | 2026-03-19 00:50:07.586548 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-19 00:50:07.586554 | orchestrator | Thursday 19 March 2026 00:47:46 +0000 (0:00:00.606) 0:02:05.851 ******** 2026-03-19 00:50:07.586560 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586566 | 
orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586572 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586578 | orchestrator | 2026-03-19 00:50:07.586584 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-19 00:50:07.586591 | orchestrator | Thursday 19 March 2026 00:47:47 +0000 (0:00:01.075) 0:02:06.926 ******** 2026-03-19 00:50:07.586598 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:07.586605 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:07.586611 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:07.586618 | orchestrator | 2026-03-19 00:50:07.586624 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-19 00:50:07.586630 | orchestrator | Thursday 19 March 2026 00:47:48 +0000 (0:00:00.939) 0:02:07.866 ******** 2026-03-19 00:50:07.586637 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.586644 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.586649 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.586655 | orchestrator | 2026-03-19 00:50:07.586661 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-19 00:50:07.586667 | orchestrator | Thursday 19 March 2026 00:47:49 +0000 (0:00:00.572) 0:02:08.439 ******** 2026-03-19 00:50:07.586674 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.586680 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.586686 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.586693 | orchestrator | 2026-03-19 00:50:07.586701 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-19 00:50:07.586707 | orchestrator | Thursday 19 March 2026 00:47:49 +0000 (0:00:00.338) 0:02:08.777 ******** 2026-03-19 00:50:07.586713 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.586719 | orchestrator | 
ok: [testbed-node-1] 2026-03-19 00:50:07.586726 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.586731 | orchestrator | 2026-03-19 00:50:07.586737 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-19 00:50:07.586743 | orchestrator | Thursday 19 March 2026 00:47:50 +0000 (0:00:00.717) 0:02:09.495 ******** 2026-03-19 00:50:07.586749 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.586755 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.586761 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.586767 | orchestrator | 2026-03-19 00:50:07.586774 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-19 00:50:07.586781 | orchestrator | Thursday 19 March 2026 00:47:50 +0000 (0:00:00.705) 0:02:10.201 ******** 2026-03-19 00:50:07.586787 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-19 00:50:07.586800 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-19 00:50:07.586806 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-19 00:50:07.586812 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-19 00:50:07.586827 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-19 00:50:07.586833 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-19 00:50:07.586840 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-19 00:50:07.586847 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-19 
00:50:07.586852 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-19 00:50:07.586858 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-19 00:50:07.586866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-19 00:50:07.586873 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-19 00:50:07.586880 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-19 00:50:07.586886 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-19 00:50:07.586893 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-19 00:50:07.586899 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-19 00:50:07.586905 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-19 00:50:07.586911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-19 00:50:07.586928 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-19 00:50:07.586932 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-19 00:50:07.586936 | orchestrator | 2026-03-19 00:50:07.586939 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-19 00:50:07.586943 | orchestrator | 2026-03-19 00:50:07.586947 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-19 00:50:07.586951 | orchestrator | Thursday 19 March 2026 00:47:54 +0000 (0:00:03.472) 
0:02:13.674 ******** 2026-03-19 00:50:07.586955 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:50:07.586959 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:50:07.586963 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:50:07.586967 | orchestrator | 2026-03-19 00:50:07.586971 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-19 00:50:07.586975 | orchestrator | Thursday 19 March 2026 00:47:54 +0000 (0:00:00.293) 0:02:13.968 ******** 2026-03-19 00:50:07.586978 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:50:07.586982 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:50:07.586986 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:50:07.586990 | orchestrator | 2026-03-19 00:50:07.586995 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-19 00:50:07.587001 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.543) 0:02:14.512 ******** 2026-03-19 00:50:07.587007 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:50:07.587013 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:50:07.587024 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:50:07.587031 | orchestrator | 2026-03-19 00:50:07.587037 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-19 00:50:07.587043 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.471) 0:02:14.983 ******** 2026-03-19 00:50:07.587049 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:50:07.587055 | orchestrator | 2026-03-19 00:50:07.587061 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-19 00:50:07.587068 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.455) 0:02:15.439 ******** 2026-03-19 00:50:07.587082 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.587089 
| orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.587095 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.587103 | orchestrator | 2026-03-19 00:50:07.587107 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-19 00:50:07.587111 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.286) 0:02:15.725 ******** 2026-03-19 00:50:07.587117 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.587123 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.587132 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.587142 | orchestrator | 2026-03-19 00:50:07.587147 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-19 00:50:07.587153 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:00.445) 0:02:16.171 ******** 2026-03-19 00:50:07.587159 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.587165 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.587171 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.587177 | orchestrator | 2026-03-19 00:50:07.587183 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-19 00:50:07.587190 | orchestrator | Thursday 19 March 2026 00:47:57 +0000 (0:00:00.346) 0:02:16.517 ******** 2026-03-19 00:50:07.587196 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.587251 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.587259 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.587266 | orchestrator | 2026-03-19 00:50:07.587283 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-19 00:50:07.587291 | orchestrator | Thursday 19 March 2026 00:47:57 +0000 (0:00:00.632) 0:02:17.149 ******** 2026-03-19 00:50:07.587297 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.587305 | 
orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.587311 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.587318 | orchestrator | 2026-03-19 00:50:07.587325 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-19 00:50:07.587331 | orchestrator | Thursday 19 March 2026 00:47:58 +0000 (0:00:01.102) 0:02:18.252 ******** 2026-03-19 00:50:07.587334 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.587338 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.587342 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.587346 | orchestrator | 2026-03-19 00:50:07.587351 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-19 00:50:07.587357 | orchestrator | Thursday 19 March 2026 00:48:00 +0000 (0:00:01.597) 0:02:19.850 ******** 2026-03-19 00:50:07.587366 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:50:07.587373 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:50:07.587380 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:50:07.587386 | orchestrator | 2026-03-19 00:50:07.587391 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-19 00:50:07.587397 | orchestrator | 2026-03-19 00:50:07.587403 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-19 00:50:07.587409 | orchestrator | Thursday 19 March 2026 00:48:11 +0000 (0:00:10.767) 0:02:30.618 ******** 2026-03-19 00:50:07.587414 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:07.587420 | orchestrator | 2026-03-19 00:50:07.587426 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-19 00:50:07.587432 | orchestrator | Thursday 19 March 2026 00:48:11 +0000 (0:00:00.695) 0:02:31.313 ******** 2026-03-19 00:50:07.587438 | orchestrator | changed: [testbed-manager] 2026-03-19 
00:50:07.587444 | orchestrator | 2026-03-19 00:50:07.587449 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-19 00:50:07.587456 | orchestrator | Thursday 19 March 2026 00:48:12 +0000 (0:00:00.440) 0:02:31.753 ******** 2026-03-19 00:50:07.587461 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-19 00:50:07.587468 | orchestrator | 2026-03-19 00:50:07.587474 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-19 00:50:07.587488 | orchestrator | Thursday 19 March 2026 00:48:13 +0000 (0:00:01.227) 0:02:32.981 ******** 2026-03-19 00:50:07.587497 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587501 | orchestrator | 2026-03-19 00:50:07.587504 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-19 00:50:07.587508 | orchestrator | Thursday 19 March 2026 00:48:14 +0000 (0:00:01.177) 0:02:34.159 ******** 2026-03-19 00:50:07.587512 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587515 | orchestrator | 2026-03-19 00:50:07.587519 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-19 00:50:07.587523 | orchestrator | Thursday 19 March 2026 00:48:15 +0000 (0:00:00.521) 0:02:34.680 ******** 2026-03-19 00:50:07.587561 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-19 00:50:07.587566 | orchestrator | 2026-03-19 00:50:07.587569 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-19 00:50:07.587573 | orchestrator | Thursday 19 March 2026 00:48:17 +0000 (0:00:01.735) 0:02:36.415 ******** 2026-03-19 00:50:07.587577 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-19 00:50:07.587581 | orchestrator | 2026-03-19 00:50:07.587585 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-03-19 00:50:07.587589 | orchestrator | Thursday 19 March 2026 00:48:17 +0000 (0:00:00.889) 0:02:37.305 ******** 2026-03-19 00:50:07.587592 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587596 | orchestrator | 2026-03-19 00:50:07.587600 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-19 00:50:07.587604 | orchestrator | Thursday 19 March 2026 00:48:18 +0000 (0:00:00.445) 0:02:37.750 ******** 2026-03-19 00:50:07.587608 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587612 | orchestrator | 2026-03-19 00:50:07.587616 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-19 00:50:07.587619 | orchestrator | 2026-03-19 00:50:07.587623 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-19 00:50:07.587627 | orchestrator | Thursday 19 March 2026 00:48:18 +0000 (0:00:00.441) 0:02:38.192 ******** 2026-03-19 00:50:07.587630 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:07.587649 | orchestrator | 2026-03-19 00:50:07.587662 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-19 00:50:07.587668 | orchestrator | Thursday 19 March 2026 00:48:18 +0000 (0:00:00.123) 0:02:38.316 ******** 2026-03-19 00:50:07.587675 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 00:50:07.587682 | orchestrator | 2026-03-19 00:50:07.587688 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-19 00:50:07.587694 | orchestrator | Thursday 19 March 2026 00:48:19 +0000 (0:00:00.205) 0:02:38.521 ******** 2026-03-19 00:50:07.587700 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:07.587706 | orchestrator | 2026-03-19 00:50:07.587712 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-03-19 00:50:07.587718 | orchestrator | Thursday 19 March 2026 00:48:20 +0000 (0:00:01.099) 0:02:39.620 ******** 2026-03-19 00:50:07.587724 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:07.587730 | orchestrator | 2026-03-19 00:50:07.587734 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-19 00:50:07.587738 | orchestrator | Thursday 19 March 2026 00:48:21 +0000 (0:00:01.347) 0:02:40.967 ******** 2026-03-19 00:50:07.587742 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587746 | orchestrator | 2026-03-19 00:50:07.587750 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-19 00:50:07.587753 | orchestrator | Thursday 19 March 2026 00:48:22 +0000 (0:00:00.817) 0:02:41.785 ******** 2026-03-19 00:50:07.587757 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:07.587761 | orchestrator | 2026-03-19 00:50:07.587772 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-19 00:50:07.587781 | orchestrator | Thursday 19 March 2026 00:48:22 +0000 (0:00:00.438) 0:02:42.224 ******** 2026-03-19 00:50:07.587785 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587789 | orchestrator | 2026-03-19 00:50:07.587792 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-19 00:50:07.587796 | orchestrator | Thursday 19 March 2026 00:48:30 +0000 (0:00:07.309) 0:02:49.534 ******** 2026-03-19 00:50:07.587800 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.587804 | orchestrator | 2026-03-19 00:50:07.587807 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-19 00:50:07.587811 | orchestrator | Thursday 19 March 2026 00:48:42 +0000 (0:00:12.259) 0:03:01.793 ******** 2026-03-19 00:50:07.587815 | orchestrator | ok: [testbed-manager] 2026-03-19 
00:50:07.587819 | orchestrator | 2026-03-19 00:50:07.587823 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-19 00:50:07.587826 | orchestrator | 2026-03-19 00:50:07.587830 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-19 00:50:07.587834 | orchestrator | Thursday 19 March 2026 00:48:42 +0000 (0:00:00.524) 0:03:02.318 ******** 2026-03-19 00:50:07.587837 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.587841 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.587845 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.587849 | orchestrator | 2026-03-19 00:50:07.587853 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-19 00:50:07.587856 | orchestrator | Thursday 19 March 2026 00:48:43 +0000 (0:00:00.441) 0:03:02.759 ******** 2026-03-19 00:50:07.587860 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.587864 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.587867 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.587871 | orchestrator | 2026-03-19 00:50:07.587875 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-19 00:50:07.587879 | orchestrator | Thursday 19 March 2026 00:48:43 +0000 (0:00:00.342) 0:03:03.102 ******** 2026-03-19 00:50:07.587883 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:50:07.587887 | orchestrator | 2026-03-19 00:50:07.587891 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-19 00:50:07.587895 | orchestrator | Thursday 19 March 2026 00:48:44 +0000 (0:00:00.470) 0:03:03.573 ******** 2026-03-19 00:50:07.587904 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.587907 | 
orchestrator | 2026-03-19 00:50:07.587911 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-19 00:50:07.587915 | orchestrator | Thursday 19 March 2026 00:48:44 +0000 (0:00:00.668) 0:03:04.241 ******** 2026-03-19 00:50:07.587919 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.587923 | orchestrator | 2026-03-19 00:50:07.587926 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-19 00:50:07.587930 | orchestrator | Thursday 19 March 2026 00:48:45 +0000 (0:00:00.883) 0:03:05.125 ******** 2026-03-19 00:50:07.587934 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.587937 | orchestrator | 2026-03-19 00:50:07.587941 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-19 00:50:07.587945 | orchestrator | Thursday 19 March 2026 00:48:46 +0000 (0:00:00.275) 0:03:05.401 ******** 2026-03-19 00:50:07.587949 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.587952 | orchestrator | 2026-03-19 00:50:07.587956 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-19 00:50:07.587960 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 (0:00:01.089) 0:03:06.491 ******** 2026-03-19 00:50:07.587963 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.587967 | orchestrator | 2026-03-19 00:50:07.587971 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-19 00:50:07.587975 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 (0:00:00.087) 0:03:06.578 ******** 2026-03-19 00:50:07.587982 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.587986 | orchestrator | 2026-03-19 00:50:07.587990 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-19 00:50:07.587993 | orchestrator | Thursday 19 
March 2026 00:48:47 +0000 (0:00:00.091) 0:03:06.669 ******** 2026-03-19 00:50:07.587997 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.588001 | orchestrator | 2026-03-19 00:50:07.588005 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-19 00:50:07.588008 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 (0:00:00.088) 0:03:06.758 ******** 2026-03-19 00:50:07.588012 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.588016 | orchestrator | 2026-03-19 00:50:07.588020 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-19 00:50:07.588023 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 (0:00:00.306) 0:03:07.065 ******** 2026-03-19 00:50:07.588027 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.588031 | orchestrator | 2026-03-19 00:50:07.588035 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-19 00:50:07.588038 | orchestrator | Thursday 19 March 2026 00:48:53 +0000 (0:00:05.421) 0:03:12.486 ******** 2026-03-19 00:50:07.588042 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-19 00:50:07.588046 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-19 00:50:07.588051 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-19 00:50:07.588055 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-19 00:50:07.588058 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-19 00:50:07.588062 | orchestrator | 2026-03-19 00:50:07.588066 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-19 00:50:07.588070 | orchestrator | Thursday 19 March 2026 00:49:35 +0000 (0:00:41.912) 0:03:54.398 ******** 2026-03-19 00:50:07.588076 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.588081 | orchestrator | 2026-03-19 00:50:07.588084 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-19 00:50:07.588090 | orchestrator | Thursday 19 March 2026 00:49:36 +0000 (0:00:01.199) 0:03:55.598 ******** 2026-03-19 00:50:07.588097 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.588102 | orchestrator | 2026-03-19 00:50:07.588111 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-19 00:50:07.588119 | orchestrator | Thursday 19 March 2026 00:49:37 +0000 (0:00:01.492) 0:03:57.091 ******** 2026-03-19 00:50:07.588124 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 00:50:07.588130 | orchestrator | 2026-03-19 00:50:07.588136 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-19 00:50:07.588142 | orchestrator | Thursday 19 March 2026 00:49:39 +0000 (0:00:01.307) 0:03:58.398 ******** 2026-03-19 00:50:07.588148 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.588152 | orchestrator | 2026-03-19 00:50:07.588155 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-19 00:50:07.588170 | orchestrator 
| Thursday 19 March 2026 00:49:39 +0000 (0:00:00.154) 0:03:58.553 ******** 2026-03-19 00:50:07.588173 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-19 00:50:07.588178 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-19 00:50:07.588181 | orchestrator | 2026-03-19 00:50:07.588185 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-19 00:50:07.588189 | orchestrator | Thursday 19 March 2026 00:49:42 +0000 (0:00:03.313) 0:04:01.866 ******** 2026-03-19 00:50:07.588193 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.588197 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.588222 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.588236 | orchestrator | 2026-03-19 00:50:07.588242 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-19 00:50:07.588247 | orchestrator | Thursday 19 March 2026 00:49:42 +0000 (0:00:00.357) 0:04:02.224 ******** 2026-03-19 00:50:07.588251 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.588255 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.588259 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.588262 | orchestrator | 2026-03-19 00:50:07.588266 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-19 00:50:07.588270 | orchestrator | 2026-03-19 00:50:07.588278 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-19 00:50:07.588281 | orchestrator | Thursday 19 March 2026 00:49:43 +0000 (0:00:00.950) 0:04:03.175 ******** 2026-03-19 00:50:07.588292 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:07.588298 | orchestrator | 2026-03-19 00:50:07.588304 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-19 00:50:07.588310 | orchestrator | Thursday 19 March 2026 00:49:44 +0000 (0:00:00.181) 0:04:03.356 ******** 2026-03-19 00:50:07.588316 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 00:50:07.588321 | orchestrator | 2026-03-19 00:50:07.588327 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-19 00:50:07.588333 | orchestrator | Thursday 19 March 2026 00:49:44 +0000 (0:00:00.437) 0:04:03.794 ******** 2026-03-19 00:50:07.588339 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:07.588345 | orchestrator | 2026-03-19 00:50:07.588350 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-19 00:50:07.588357 | orchestrator | 2026-03-19 00:50:07.588363 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-19 00:50:07.588368 | orchestrator | Thursday 19 March 2026 00:49:50 +0000 (0:00:06.127) 0:04:09.922 ******** 2026-03-19 00:50:07.588374 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:50:07.588381 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:50:07.588387 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:50:07.588392 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:07.588398 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:07.588404 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:07.588409 | orchestrator | 2026-03-19 00:50:07.588415 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-19 00:50:07.588421 | orchestrator | Thursday 19 March 2026 00:49:51 +0000 (0:00:00.528) 0:04:10.450 ******** 2026-03-19 00:50:07.588427 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-19 00:50:07.588433 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-19 00:50:07.588439 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-19 00:50:07.588445 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-19 00:50:07.588452 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-19 00:50:07.588458 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-19 00:50:07.588464 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-19 00:50:07.588470 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-19 00:50:07.588476 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-19 00:50:07.588485 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-19 00:50:07.588488 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-19 00:50:07.588492 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-19 00:50:07.588502 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-19 00:50:07.588513 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-19 00:50:07.588518 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-19 00:50:07.588524 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-19 00:50:07.588530 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-19 00:50:07.588537 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/rook-mds=true) 2026-03-19 00:50:07.588543 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-19 00:50:07.588549 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-19 00:50:07.588556 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-19 00:50:07.588563 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-19 00:50:07.588569 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-19 00:50:07.588575 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-19 00:50:07.588581 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-19 00:50:07.588587 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-19 00:50:07.588593 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-19 00:50:07.588599 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-19 00:50:07.588603 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-19 00:50:07.588607 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-19 00:50:07.588611 | orchestrator | 2026-03-19 00:50:07.588615 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-19 00:50:07.588619 | orchestrator | Thursday 19 March 2026 00:50:04 +0000 (0:00:13.876) 0:04:24.327 ******** 2026-03-19 00:50:07.588623 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.588627 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.588630 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 00:50:07.588635 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.588639 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.588643 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.588649 | orchestrator | 2026-03-19 00:50:07.588657 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-19 00:50:07.588667 | orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:00.473) 0:04:24.801 ******** 2026-03-19 00:50:07.588673 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:50:07.588679 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:50:07.588684 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:50:07.588691 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:07.588697 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:07.588703 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:07.588709 | orchestrator | 2026-03-19 00:50:07.588715 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:50:07.588721 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:50:07.588731 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-19 00:50:07.588737 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-19 00:50:07.588752 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-19 00:50:07.588762 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 00:50:07.588768 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 00:50:07.588774 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 00:50:07.588780 | orchestrator | 2026-03-19 00:50:07.588787 | orchestrator | 2026-03-19 00:50:07.588794 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:50:07.588800 | orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:00.477) 0:04:25.278 ******** 2026-03-19 00:50:07.588806 | orchestrator | =============================================================================== 2026-03-19 00:50:07.588813 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.48s 2026-03-19 00:50:07.588820 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.91s 2026-03-19 00:50:07.588827 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.59s 2026-03-19 00:50:07.589327 | orchestrator | Manage labels ---------------------------------------------------------- 13.88s 2026-03-19 00:50:07.589357 | orchestrator | kubectl : Install required packages ------------------------------------ 12.26s 2026-03-19 00:50:07.589361 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.77s 2026-03-19 00:50:07.589365 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.31s 2026-03-19 00:50:07.589369 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.53s 2026-03-19 00:50:07.589373 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.13s 2026-03-19 00:50:07.589377 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.42s 2026-03-19 00:50:07.589384 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.47s 2026-03-19 00:50:07.589389 | orchestrator 
| k3s_server_post : Test for BGP config resources ------------------------- 3.31s 2026-03-19 00:50:07.589393 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.17s 2026-03-19 00:50:07.589397 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.52s 2026-03-19 00:50:07.589400 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.50s 2026-03-19 00:50:07.589404 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.38s 2026-03-19 00:50:07.589408 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.31s 2026-03-19 00:50:07.589412 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.83s 2026-03-19 00:50:07.589416 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.75s 2026-03-19 00:50:07.589419 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.74s 2026-03-19 00:50:07.589423 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task c2a7409a-023a-4148-8f01-fdb5930939f8 is in state STARTED 2026-03-19 00:50:07.589428 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task c15efca3-adb2-4460-b1de-3cbcc93aa49a is in state STARTED 2026-03-19 00:50:07.589431 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:07.589581 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:07.589592 | orchestrator | 2026-03-19 00:50:07 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:07.589608 | orchestrator | 2026-03-19 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:10.622257 | orchestrator | 2026-03-19 00:50:10 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state 
STARTED 2026-03-19 00:50:10.624482 | orchestrator | 2026-03-19 00:50:10 | INFO  | Task c2a7409a-023a-4148-8f01-fdb5930939f8 is in state STARTED 2026-03-19 00:50:10.624747 | orchestrator | 2026-03-19 00:50:10 | INFO  | Task c15efca3-adb2-4460-b1de-3cbcc93aa49a is in state STARTED 2026-03-19 00:50:10.625384 | orchestrator | 2026-03-19 00:50:10 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:10.628184 | orchestrator | 2026-03-19 00:50:10 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:10.628665 | orchestrator | 2026-03-19 00:50:10 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:10.628693 | orchestrator | 2026-03-19 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:13.671448 | orchestrator | 2026-03-19 00:50:13 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:13.671549 | orchestrator | 2026-03-19 00:50:13 | INFO  | Task c2a7409a-023a-4148-8f01-fdb5930939f8 is in state SUCCESS 2026-03-19 00:50:13.673471 | orchestrator | 2026-03-19 00:50:13 | INFO  | Task c15efca3-adb2-4460-b1de-3cbcc93aa49a is in state STARTED 2026-03-19 00:50:13.674236 | orchestrator | 2026-03-19 00:50:13 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:13.674899 | orchestrator | 2026-03-19 00:50:13 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:13.676572 | orchestrator | 2026-03-19 00:50:13 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:13.676615 | orchestrator | 2026-03-19 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:16.703536 | orchestrator | 2026-03-19 00:50:16 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:16.705445 | orchestrator | 2026-03-19 00:50:16 | INFO  | Task c15efca3-adb2-4460-b1de-3cbcc93aa49a is in state SUCCESS 
2026-03-19 00:50:16.706174 | orchestrator | 2026-03-19 00:50:16 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:16.707921 | orchestrator | 2026-03-19 00:50:16 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:16.710851 | orchestrator | 2026-03-19 00:50:16 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:16.710900 | orchestrator | 2026-03-19 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:19.746820 | orchestrator | 2026-03-19 00:50:19 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:19.749231 | orchestrator | 2026-03-19 00:50:19 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:19.751178 | orchestrator | 2026-03-19 00:50:19 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:19.753219 | orchestrator | 2026-03-19 00:50:19 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:19.753294 | orchestrator | 2026-03-19 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:22.789179 | orchestrator | 2026-03-19 00:50:22 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state STARTED 2026-03-19 00:50:22.789410 | orchestrator | 2026-03-19 00:50:22 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:22.791037 | orchestrator | 2026-03-19 00:50:22 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:22.793589 | orchestrator | 2026-03-19 00:50:22 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:22.793633 | orchestrator | 2026-03-19 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:25.823362 | orchestrator | 2026-03-19 00:50:25 | INFO  | Task e350f6fb-1f57-4de5-bb2e-45eca61fb49a is in state SUCCESS 2026-03-19 00:50:25.826442 | 
orchestrator | 2026-03-19 00:50:25.826523 | orchestrator | 2026-03-19 00:50:25.826530 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-19 00:50:25.826535 | orchestrator | 2026-03-19 00:50:25.826539 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-19 00:50:25.826544 | orchestrator | Thursday 19 March 2026 00:50:08 +0000 (0:00:00.199) 0:00:00.199 ******** 2026-03-19 00:50:25.826548 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-19 00:50:25.826553 | orchestrator | 2026-03-19 00:50:25.826557 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-19 00:50:25.826561 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.931) 0:00:01.131 ******** 2026-03-19 00:50:25.826566 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:25.826570 | orchestrator | 2026-03-19 00:50:25.826574 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-19 00:50:25.826577 | orchestrator | Thursday 19 March 2026 00:50:11 +0000 (0:00:01.406) 0:00:02.538 ******** 2026-03-19 00:50:25.826581 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:25.826585 | orchestrator | 2026-03-19 00:50:25.826589 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:50:25.826593 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:50:25.826598 | orchestrator | 2026-03-19 00:50:25.826602 | orchestrator | 2026-03-19 00:50:25.826606 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:50:25.826610 | orchestrator | Thursday 19 March 2026 00:50:11 +0000 (0:00:00.368) 0:00:02.906 ******** 2026-03-19 00:50:25.826613 | orchestrator | 
=============================================================================== 2026-03-19 00:50:25.826617 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.41s 2026-03-19 00:50:25.826623 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s 2026-03-19 00:50:25.826630 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.37s 2026-03-19 00:50:25.826636 | orchestrator | 2026-03-19 00:50:25.826641 | orchestrator | 2026-03-19 00:50:25.826646 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-19 00:50:25.826656 | orchestrator | 2026-03-19 00:50:25.826663 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-19 00:50:25.826671 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.223) 0:00:00.223 ******** 2026-03-19 00:50:25.826677 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:25.826683 | orchestrator | 2026-03-19 00:50:25.826689 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-19 00:50:25.826695 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.672) 0:00:00.895 ******** 2026-03-19 00:50:25.826702 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:25.826708 | orchestrator | 2026-03-19 00:50:25.826713 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-19 00:50:25.826719 | orchestrator | Thursday 19 March 2026 00:50:10 +0000 (0:00:00.483) 0:00:01.379 ******** 2026-03-19 00:50:25.826725 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-19 00:50:25.826731 | orchestrator | 2026-03-19 00:50:25.826737 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-19 00:50:25.826742 | orchestrator | Thursday 19 March 2026 00:50:11 
+0000 (0:00:00.893) 0:00:02.273 ******** 2026-03-19 00:50:25.826791 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:25.826804 | orchestrator | 2026-03-19 00:50:25.826811 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-19 00:50:25.826817 | orchestrator | Thursday 19 March 2026 00:50:12 +0000 (0:00:00.998) 0:00:03.271 ******** 2026-03-19 00:50:25.826823 | orchestrator | changed: [testbed-manager] 2026-03-19 00:50:25.826828 | orchestrator | 2026-03-19 00:50:25.826834 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-19 00:50:25.826840 | orchestrator | Thursday 19 March 2026 00:50:12 +0000 (0:00:00.413) 0:00:03.684 ******** 2026-03-19 00:50:25.826846 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-19 00:50:25.826852 | orchestrator | 2026-03-19 00:50:25.826859 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-19 00:50:25.826863 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:01.367) 0:00:05.052 ******** 2026-03-19 00:50:25.826869 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-19 00:50:25.826875 | orchestrator | 2026-03-19 00:50:25.826902 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-19 00:50:25.826909 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:00.704) 0:00:05.757 ******** 2026-03-19 00:50:25.826938 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:25.826945 | orchestrator | 2026-03-19 00:50:25.826951 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-19 00:50:25.826958 | orchestrator | Thursday 19 March 2026 00:50:15 +0000 (0:00:00.343) 0:00:06.100 ******** 2026-03-19 00:50:25.826962 | orchestrator | ok: [testbed-manager] 2026-03-19 00:50:25.826967 | orchestrator | 2026-03-19 00:50:25.826974 | orchestrator 
| PLAY RECAP ********************************************************************* 2026-03-19 00:50:25.826980 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:50:25.826990 | orchestrator | 2026-03-19 00:50:25.826997 | orchestrator | 2026-03-19 00:50:25.827004 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:50:25.827009 | orchestrator | Thursday 19 March 2026 00:50:15 +0000 (0:00:00.278) 0:00:06.379 ******** 2026-03-19 00:50:25.827016 | orchestrator | =============================================================================== 2026-03-19 00:50:25.827022 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.37s 2026-03-19 00:50:25.827028 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.00s 2026-03-19 00:50:25.827034 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s 2026-03-19 00:50:25.827059 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.70s 2026-03-19 00:50:25.827066 | orchestrator | Get home directory of operator user ------------------------------------- 0.67s 2026-03-19 00:50:25.827072 | orchestrator | Create .kube directory -------------------------------------------------- 0.48s 2026-03-19 00:50:25.827077 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.41s 2026-03-19 00:50:25.827083 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s 2026-03-19 00:50:25.827090 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s 2026-03-19 00:50:25.827096 | orchestrator | 2026-03-19 00:50:25.827102 | orchestrator | 2026-03-19 00:50:25.827108 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 
2026-03-19 00:50:25.827114 | orchestrator | 2026-03-19 00:50:25.827121 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-19 00:50:25.827128 | orchestrator | Thursday 19 March 2026 00:48:09 +0000 (0:00:00.110) 0:00:00.110 ******** 2026-03-19 00:50:25.827137 | orchestrator | ok: [localhost] => { 2026-03-19 00:50:25.827144 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-19 00:50:25.827159 | orchestrator | } 2026-03-19 00:50:25.827166 | orchestrator | 2026-03-19 00:50:25.827172 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-19 00:50:25.827179 | orchestrator | Thursday 19 March 2026 00:48:09 +0000 (0:00:00.043) 0:00:00.154 ******** 2026-03-19 00:50:25.827204 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-19 00:50:25.827212 | orchestrator | ...ignoring 2026-03-19 00:50:25.827219 | orchestrator | 2026-03-19 00:50:25.827226 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-19 00:50:25.827231 | orchestrator | Thursday 19 March 2026 00:48:13 +0000 (0:00:03.381) 0:00:03.535 ******** 2026-03-19 00:50:25.827238 | orchestrator | skipping: [localhost] 2026-03-19 00:50:25.827244 | orchestrator | 2026-03-19 00:50:25.827250 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-19 00:50:25.827256 | orchestrator | Thursday 19 March 2026 00:48:13 +0000 (0:00:00.104) 0:00:03.639 ******** 2026-03-19 00:50:25.827262 | orchestrator | ok: [localhost] 2026-03-19 00:50:25.827268 | orchestrator | 2026-03-19 00:50:25.827274 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:50:25.827281 | 
orchestrator | 2026-03-19 00:50:25.827287 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 00:50:25.827293 | orchestrator | Thursday 19 March 2026 00:48:13 +0000 (0:00:00.573) 0:00:04.212 ******** 2026-03-19 00:50:25.827299 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:25.827306 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:25.827312 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:25.827319 | orchestrator | 2026-03-19 00:50:25.827325 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:50:25.827331 | orchestrator | Thursday 19 March 2026 00:48:14 +0000 (0:00:00.672) 0:00:04.885 ******** 2026-03-19 00:50:25.827337 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-19 00:50:25.827345 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-19 00:50:25.827351 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-19 00:50:25.827357 | orchestrator | 2026-03-19 00:50:25.827363 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-19 00:50:25.827370 | orchestrator | 2026-03-19 00:50:25.827376 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 00:50:25.827382 | orchestrator | Thursday 19 March 2026 00:48:15 +0000 (0:00:01.159) 0:00:06.045 ******** 2026-03-19 00:50:25.827389 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:50:25.827396 | orchestrator | 2026-03-19 00:50:25.827402 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-19 00:50:25.827407 | orchestrator | Thursday 19 March 2026 00:48:16 +0000 (0:00:00.656) 0:00:06.702 ******** 2026-03-19 00:50:25.827413 | orchestrator | ok: [testbed-node-0] 2026-03-19 
00:50:25.827419 | orchestrator | 2026-03-19 00:50:25.827424 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-19 00:50:25.827430 | orchestrator | Thursday 19 March 2026 00:48:18 +0000 (0:00:02.206) 0:00:08.908 ******** 2026-03-19 00:50:25.827436 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827442 | orchestrator | 2026-03-19 00:50:25.827447 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-19 00:50:25.827459 | orchestrator | Thursday 19 March 2026 00:48:19 +0000 (0:00:00.498) 0:00:09.406 ******** 2026-03-19 00:50:25.827464 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827470 | orchestrator | 2026-03-19 00:50:25.827475 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-19 00:50:25.827481 | orchestrator | Thursday 19 March 2026 00:48:19 +0000 (0:00:00.394) 0:00:09.801 ******** 2026-03-19 00:50:25.827486 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827492 | orchestrator | 2026-03-19 00:50:25.827498 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-19 00:50:25.827511 | orchestrator | Thursday 19 March 2026 00:48:19 +0000 (0:00:00.457) 0:00:10.259 ******** 2026-03-19 00:50:25.827517 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827523 | orchestrator | 2026-03-19 00:50:25.827529 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 00:50:25.827535 | orchestrator | Thursday 19 March 2026 00:48:20 +0000 (0:00:00.391) 0:00:10.650 ******** 2026-03-19 00:50:25.827541 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:50:25.827547 | orchestrator | 2026-03-19 00:50:25.827553 | orchestrator | TASK [rabbitmq : Get container facts] 
****************************************** 2026-03-19 00:50:25.827567 | orchestrator | Thursday 19 March 2026 00:48:20 +0000 (0:00:00.613) 0:00:11.263 ******** 2026-03-19 00:50:25.827574 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:25.827581 | orchestrator | 2026-03-19 00:50:25.827587 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-19 00:50:25.827593 | orchestrator | Thursday 19 March 2026 00:48:21 +0000 (0:00:00.827) 0:00:12.090 ******** 2026-03-19 00:50:25.827600 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827606 | orchestrator | 2026-03-19 00:50:25.827612 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-19 00:50:25.827618 | orchestrator | Thursday 19 March 2026 00:48:22 +0000 (0:00:00.751) 0:00:12.842 ******** 2026-03-19 00:50:25.827624 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827631 | orchestrator | 2026-03-19 00:50:25.827637 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-19 00:50:25.827643 | orchestrator | Thursday 19 March 2026 00:48:23 +0000 (0:00:00.603) 0:00:13.445 ******** 2026-03-19 00:50:25.827654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827694 | orchestrator | 2026-03-19 00:50:25.827702 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-19 00:50:25.827708 | orchestrator | Thursday 19 March 2026 00:48:26 +0000 (0:00:02.969) 0:00:16.415 ******** 2026-03-19 00:50:25.827720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827747 | orchestrator | 2026-03-19 00:50:25.827753 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-19 00:50:25.827770 | orchestrator | Thursday 19 March 2026 00:48:27 +0000 (0:00:01.731) 0:00:18.146 ******** 2026-03-19 00:50:25.827774 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 00:50:25.827779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 00:50:25.827783 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 00:50:25.827788 | orchestrator | 2026-03-19 00:50:25.827794 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-19 00:50:25.827800 | orchestrator | Thursday 19 March 2026 00:48:29 +0000 (0:00:01.857) 0:00:20.004 ******** 2026-03-19 00:50:25.827809 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 00:50:25.827817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 00:50:25.827822 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 00:50:25.827828 | orchestrator | 2026-03-19 00:50:25.827833 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-19 00:50:25.827844 | orchestrator | Thursday 19 March 2026 00:48:31 +0000 (0:00:02.023) 0:00:22.028 ******** 2026-03-19 00:50:25.827850 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 00:50:25.827856 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 
00:50:25.827862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 00:50:25.827867 | orchestrator | 2026-03-19 00:50:25.827873 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-19 00:50:25.827878 | orchestrator | Thursday 19 March 2026 00:48:32 +0000 (0:00:01.301) 0:00:23.330 ******** 2026-03-19 00:50:25.827885 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 00:50:25.827892 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 00:50:25.827896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 00:50:25.827900 | orchestrator | 2026-03-19 00:50:25.827904 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-19 00:50:25.827907 | orchestrator | Thursday 19 March 2026 00:48:34 +0000 (0:00:01.909) 0:00:25.240 ******** 2026-03-19 00:50:25.827911 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 00:50:25.827915 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 00:50:25.827919 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 00:50:25.827922 | orchestrator | 2026-03-19 00:50:25.827926 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-19 00:50:25.827930 | orchestrator | Thursday 19 March 2026 00:48:36 +0000 (0:00:01.660) 0:00:26.901 ******** 2026-03-19 00:50:25.827934 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 00:50:25.827937 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 00:50:25.827946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 00:50:25.827950 | orchestrator | 2026-03-19 00:50:25.827954 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 00:50:25.827957 | orchestrator | Thursday 19 March 2026 00:48:39 +0000 (0:00:02.450) 0:00:29.351 ******** 2026-03-19 00:50:25.827961 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.827965 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:25.827969 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:25.827973 | orchestrator | 2026-03-19 00:50:25.827976 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-19 00:50:25.827980 | orchestrator | Thursday 19 March 2026 00:48:39 +0000 (0:00:00.921) 0:00:30.273 ******** 2026-03-19 00:50:25.827989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.827996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.828001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:50:25.828009 | orchestrator | 2026-03-19 00:50:25.828013 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-19 00:50:25.828016 | orchestrator | Thursday 19 March 2026 00:48:41 +0000 (0:00:01.174) 0:00:31.447 ******** 2026-03-19 00:50:25.828020 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:25.828024 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:25.828028 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:25.828032 | orchestrator | 2026-03-19 00:50:25.828036 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-19 00:50:25.828040 | orchestrator | Thursday 19 March 2026 00:48:42 +0000 (0:00:01.036) 0:00:32.484 ******** 2026-03-19 00:50:25.828043 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:25.828047 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:25.828051 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:25.828055 | orchestrator | 2026-03-19 00:50:25.828059 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-19 00:50:25.828063 | orchestrator | Thursday 19 March 2026 00:48:49 +0000 (0:00:07.395) 0:00:39.879 ******** 2026-03-19 00:50:25.828066 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:25.828070 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:25.828074 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:25.828079 | orchestrator | 2026-03-19 00:50:25.828085 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 00:50:25.828094 | orchestrator | 2026-03-19 00:50:25.828100 | orchestrator | 
TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 00:50:25.828106 | orchestrator | Thursday 19 March 2026 00:48:49 +0000 (0:00:00.354) 0:00:40.234 ******** 2026-03-19 00:50:25.828112 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:25.828118 | orchestrator | 2026-03-19 00:50:25.828124 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 00:50:25.828129 | orchestrator | Thursday 19 March 2026 00:48:50 +0000 (0:00:00.698) 0:00:40.932 ******** 2026-03-19 00:50:25.828134 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:50:25.828140 | orchestrator | 2026-03-19 00:50:25.828146 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 00:50:25.828152 | orchestrator | Thursday 19 March 2026 00:48:50 +0000 (0:00:00.198) 0:00:41.130 ******** 2026-03-19 00:50:25.828158 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:25.828163 | orchestrator | 2026-03-19 00:50:25.828169 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 00:50:25.828175 | orchestrator | Thursday 19 March 2026 00:48:52 +0000 (0:00:01.665) 0:00:42.796 ******** 2026-03-19 00:50:25.828181 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:50:25.828228 | orchestrator | 2026-03-19 00:50:25.828234 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 00:50:25.828239 | orchestrator | 2026-03-19 00:50:25.828245 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 00:50:25.828252 | orchestrator | Thursday 19 March 2026 00:49:47 +0000 (0:00:55.364) 0:01:38.161 ******** 2026-03-19 00:50:25.828260 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:25.828264 | orchestrator | 2026-03-19 00:50:25.828268 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] 
********************** 2026-03-19 00:50:25.828272 | orchestrator | Thursday 19 March 2026 00:49:48 +0000 (0:00:00.562) 0:01:38.724 ******** 2026-03-19 00:50:25.828276 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:50:25.828282 | orchestrator | 2026-03-19 00:50:25.828288 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 00:50:25.828294 | orchestrator | Thursday 19 March 2026 00:49:48 +0000 (0:00:00.199) 0:01:38.923 ******** 2026-03-19 00:50:25.828300 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:25.828309 | orchestrator | 2026-03-19 00:50:25.828317 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 00:50:25.828322 | orchestrator | Thursday 19 March 2026 00:49:50 +0000 (0:00:01.697) 0:01:40.621 ******** 2026-03-19 00:50:25.828328 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:50:25.828341 | orchestrator | 2026-03-19 00:50:25.828347 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 00:50:25.828353 | orchestrator | 2026-03-19 00:50:25.828359 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 00:50:25.828365 | orchestrator | Thursday 19 March 2026 00:50:04 +0000 (0:00:14.089) 0:01:54.710 ******** 2026-03-19 00:50:25.828370 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:25.828376 | orchestrator | 2026-03-19 00:50:25.828387 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 00:50:25.828393 | orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:00.849) 0:01:55.560 ******** 2026-03-19 00:50:25.828399 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:50:25.828405 | orchestrator | 2026-03-19 00:50:25.828411 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 00:50:25.828417 | 
orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:00.463) 0:01:56.024 ******** 2026-03-19 00:50:25.828422 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:25.828428 | orchestrator | 2026-03-19 00:50:25.828434 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 00:50:25.828440 | orchestrator | Thursday 19 March 2026 00:50:12 +0000 (0:00:07.045) 0:02:03.069 ******** 2026-03-19 00:50:25.828446 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:50:25.828453 | orchestrator | 2026-03-19 00:50:25.828459 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-19 00:50:25.828464 | orchestrator | 2026-03-19 00:50:25.828471 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-19 00:50:25.828477 | orchestrator | Thursday 19 March 2026 00:50:22 +0000 (0:00:09.815) 0:02:12.885 ******** 2026-03-19 00:50:25.828484 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:50:25.828490 | orchestrator | 2026-03-19 00:50:25.828496 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-19 00:50:25.828502 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:00.525) 0:02:13.410 ******** 2026-03-19 00:50:25.828508 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:50:25.828515 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:50:25.828521 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:50:25.828527 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-19 00:50:25.828533 | orchestrator | enable_outward_rabbitmq_True 2026-03-19 00:50:25.828539 | orchestrator | 2026-03-19 00:50:25.828546 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-19 00:50:25.828550 | orchestrator | skipping: no hosts matched 2026-03-19 
00:50:25.828553 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-19 00:50:25.828557 | orchestrator | outward_rabbitmq_restart 2026-03-19 00:50:25.828561 | orchestrator | 2026-03-19 00:50:25.828565 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-19 00:50:25.828579 | orchestrator | skipping: no hosts matched 2026-03-19 00:50:25.828583 | orchestrator | 2026-03-19 00:50:25.828586 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-19 00:50:25.828590 | orchestrator | skipping: no hosts matched 2026-03-19 00:50:25.828594 | orchestrator | 2026-03-19 00:50:25.828598 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:50:25.828602 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:50:25.828606 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-19 00:50:25.828610 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:50:25.828614 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 00:50:25.828624 | orchestrator | 2026-03-19 00:50:25.828628 | orchestrator | 2026-03-19 00:50:25.828632 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:50:25.828635 | orchestrator | Thursday 19 March 2026 00:50:25 +0000 (0:00:02.392) 0:02:15.802 ******** 2026-03-19 00:50:25.828639 | orchestrator | =============================================================================== 2026-03-19 00:50:25.828644 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.27s 2026-03-19 00:50:25.828649 | orchestrator | rabbitmq : Restart rabbitmq container 
---------------------------------- 10.41s 2026-03-19 00:50:25.828655 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.40s 2026-03-19 00:50:25.828661 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.38s 2026-03-19 00:50:25.828670 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.97s 2026-03-19 00:50:25.828684 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.46s 2026-03-19 00:50:25.828691 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.39s 2026-03-19 00:50:25.828697 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.21s 2026-03-19 00:50:25.828702 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.11s 2026-03-19 00:50:25.828709 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.02s 2026-03-19 00:50:25.828714 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.91s 2026-03-19 00:50:25.828720 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.86s 2026-03-19 00:50:25.828725 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.73s 2026-03-19 00:50:25.828731 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.66s 2026-03-19 00:50:25.828736 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.30s 2026-03-19 00:50:25.828741 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.17s 2026-03-19 00:50:25.828748 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s 2026-03-19 00:50:25.828758 | orchestrator | rabbitmq : Creating rabbitmq volume 
------------------------------------- 1.04s 2026-03-19 00:50:25.828765 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.92s 2026-03-19 00:50:25.828770 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.86s 2026-03-19 00:50:25.828778 | orchestrator | 2026-03-19 00:50:25 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:25.830428 | orchestrator | 2026-03-19 00:50:25 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:25.833302 | orchestrator | 2026-03-19 00:50:25 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:25.833377 | orchestrator | 2026-03-19 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:28.877914 | orchestrator | 2026-03-19 00:50:28 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:28.879088 | orchestrator | 2026-03-19 00:50:28 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:28.880592 | orchestrator | 2026-03-19 00:50:28 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:28.880885 | orchestrator | 2026-03-19 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:31.920612 | orchestrator | 2026-03-19 00:50:31 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:50:31.922345 | orchestrator | 2026-03-19 00:50:31 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:50:31.923682 | orchestrator | 2026-03-19 00:50:31 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:50:31.923751 | orchestrator | 2026-03-19 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:50:34.970327 | orchestrator | 2026-03-19 00:50:34 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 
00:51:08.393900 | orchestrator | 2026-03-19 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:51:11.439930 | orchestrator | 2026-03-19 00:51:11 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:51:11.441207 | orchestrator | 2026-03-19 00:51:11 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:51:11.443716 | orchestrator | 2026-03-19 00:51:11 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:51:11.443757 | orchestrator | 2026-03-19 00:51:11 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:51:14.486443 | orchestrator | 2026-03-19 00:51:14 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:51:14.488021 | orchestrator | 2026-03-19 00:51:14 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state STARTED 2026-03-19 00:51:14.490287 | orchestrator | 2026-03-19 00:51:14 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:51:14.490355 | orchestrator | 2026-03-19 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:51:17.537743 | orchestrator | 2026-03-19 00:51:17 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:51:17.541188 | orchestrator | 2026-03-19 00:51:17 | INFO  | Task 53a7cce6-85df-41ef-bd4f-687f95ec4253 is in state SUCCESS 2026-03-19 00:51:17.542629 | orchestrator | 2026-03-19 00:51:17.542674 | orchestrator | 2026-03-19 00:51:17.542680 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:51:17.542685 | orchestrator | 2026-03-19 00:51:17.542689 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 00:51:17.542694 | orchestrator | Thursday 19 March 2026 00:48:58 +0000 (0:00:00.154) 0:00:00.154 ******** 2026-03-19 00:51:17.542698 | orchestrator | ok: [testbed-node-3] 2026-03-19 
00:51:17.542704 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:51:17.542708 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:51:17.542712 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.542715 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.542719 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.542723 | orchestrator | 2026-03-19 00:51:17.542727 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:51:17.542731 | orchestrator | Thursday 19 March 2026 00:48:59 +0000 (0:00:00.548) 0:00:00.702 ******** 2026-03-19 00:51:17.542735 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-19 00:51:17.542740 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-19 00:51:17.542743 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-19 00:51:17.542747 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-19 00:51:17.542751 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-19 00:51:17.542755 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-19 00:51:17.542758 | orchestrator | 2026-03-19 00:51:17.542762 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-19 00:51:17.542766 | orchestrator | 2026-03-19 00:51:17.542770 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-19 00:51:17.542774 | orchestrator | Thursday 19 March 2026 00:49:00 +0000 (0:00:00.736) 0:00:01.439 ******** 2026-03-19 00:51:17.542778 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:51:17.542784 | orchestrator | 2026-03-19 00:51:17.542788 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-19 00:51:17.542791 | 
orchestrator | Thursday 19 March 2026 00:49:01 +0000 (0:00:00.970) 0:00:02.409 ******** 2026-03-19 00:51:17.542797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542851 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542859 | orchestrator | 2026-03-19 00:51:17.542871 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-19 00:51:17.542875 | orchestrator | Thursday 19 March 2026 00:49:03 +0000 (0:00:01.966) 0:00:04.376 ******** 2026-03-19 00:51:17.542879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542906 | orchestrator | 2026-03-19 00:51:17.542910 | 
orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-19 00:51:17.542913 | orchestrator | Thursday 19 March 2026 00:49:04 +0000 (0:00:01.610) 0:00:05.986 ******** 2026-03-19 00:51:17.542920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542948 | orchestrator | 2026-03-19 00:51:17.542951 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-19 00:51:17.542955 | orchestrator | Thursday 19 March 2026 00:49:05 +0000 (0:00:01.114) 0:00:07.101 ******** 2026-03-19 00:51:17.542959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.542986 | orchestrator | 2026-03-19 00:51:17.542992 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-19 00:51:17.542996 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:01.634) 0:00:08.735 ******** 2026-03-19 00:51:17.543000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.543004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.543007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.543061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.543071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.543075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.543079 | orchestrator | 2026-03-19 00:51:17.543083 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-19 00:51:17.543087 | orchestrator | Thursday 19 March 2026 00:49:08 +0000 (0:00:01.181) 0:00:09.917 ******** 2026-03-19 00:51:17.543091 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:51:17.543095 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:51:17.543099 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:51:17.543103 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:51:17.543107 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:51:17.543111 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:51:17.543115 | orchestrator | 2026-03-19 00:51:17.543207 | orchestrator | TASK [ovn-controller 
: Configure OVN in OVSDB] ********************************* 2026-03-19 00:51:17.543213 | orchestrator | Thursday 19 March 2026 00:49:11 +0000 (0:00:02.590) 0:00:12.508 ******** 2026-03-19 00:51:17.543217 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-19 00:51:17.543222 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-19 00:51:17.543226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-19 00:51:17.543230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-19 00:51:17.543235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-19 00:51:17.543239 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-19 00:51:17.543243 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 00:51:17.543248 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 00:51:17.543256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 00:51:17.543261 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 00:51:17.543265 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 00:51:17.543269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 00:51:17.543274 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 00:51:17.543280 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 00:51:17.543285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 00:51:17.543289 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 00:51:17.543298 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 00:51:17.543302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 00:51:17.543307 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 00:51:17.543313 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 00:51:17.543317 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 00:51:17.543322 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 00:51:17.543326 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 00:51:17.543330 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 00:51:17.543335 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 00:51:17.543339 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 00:51:17.543343 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 00:51:17.543348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 00:51:17.543353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 00:51:17.543357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 00:51:17.543361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 00:51:17.543366 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 00:51:17.543370 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 00:51:17.543375 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 00:51:17.543379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 00:51:17.543384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 00:51:17.543388 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 00:51:17.543395 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 00:51:17.543399 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 00:51:17.543403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 00:51:17.543407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 
00:51:17.543411 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 00:51:17.543415 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-19 00:51:17.543420 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-19 00:51:17.543426 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-19 00:51:17.543433 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-19 00:51:17.543437 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-19 00:51:17.543441 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-19 00:51:17.543445 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 00:51:17.543449 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 00:51:17.543452 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 00:51:17.543456 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 00:51:17.543460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 00:51:17.543464 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 00:51:17.543468 | orchestrator | 2026-03-19 00:51:17.543472 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 00:51:17.543475 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:20.929) 0:00:33.437 ******** 2026-03-19 00:51:17.543479 | orchestrator | 2026-03-19 00:51:17.543483 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 00:51:17.543487 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.140) 0:00:33.577 ******** 2026-03-19 00:51:17.543491 | orchestrator | 2026-03-19 00:51:17.543495 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 00:51:17.543498 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.059) 0:00:33.637 ******** 2026-03-19 00:51:17.543502 | orchestrator | 2026-03-19 00:51:17.543506 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 00:51:17.543510 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.060) 0:00:33.698 ******** 2026-03-19 00:51:17.543513 | orchestrator | 2026-03-19 00:51:17.543517 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 00:51:17.543521 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.069) 0:00:33.767 ******** 2026-03-19 00:51:17.543525 | orchestrator | 2026-03-19 00:51:17.543529 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 00:51:17.543532 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.120) 0:00:33.888 ******** 2026-03-19 00:51:17.543536 | orchestrator | 2026-03-19 00:51:17.543540 | orchestrator | RUNNING HANDLER [ovn-controller : 
Reload systemd config] *********************** 2026-03-19 00:51:17.543543 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.067) 0:00:33.955 ******** 2026-03-19 00:51:17.543547 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:51:17.543551 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:51:17.543555 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:51:17.543559 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.543562 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.543566 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.543570 | orchestrator | 2026-03-19 00:51:17.543574 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-19 00:51:17.543578 | orchestrator | Thursday 19 March 2026 00:49:34 +0000 (0:00:02.282) 0:00:36.238 ******** 2026-03-19 00:51:17.543581 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:51:17.543585 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:51:17.543589 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:51:17.543597 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:51:17.543600 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:51:17.543604 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:51:17.543608 | orchestrator | 2026-03-19 00:51:17.543612 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-19 00:51:17.543616 | orchestrator | 2026-03-19 00:51:17.543619 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 00:51:17.543626 | orchestrator | Thursday 19 March 2026 00:50:04 +0000 (0:00:30.001) 0:01:06.239 ******** 2026-03-19 00:51:17.543630 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:51:17.543634 | orchestrator | 2026-03-19 00:51:17.543638 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-03-19 00:51:17.543641 | orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:00.504) 0:01:06.744 ******** 2026-03-19 00:51:17.543645 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:51:17.543649 | orchestrator | 2026-03-19 00:51:17.543653 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-19 00:51:17.543657 | orchestrator | Thursday 19 March 2026 00:50:06 +0000 (0:00:00.923) 0:01:07.668 ******** 2026-03-19 00:51:17.543661 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.543664 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.543668 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.543672 | orchestrator | 2026-03-19 00:51:17.543676 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-19 00:51:17.543680 | orchestrator | Thursday 19 March 2026 00:50:07 +0000 (0:00:00.992) 0:01:08.661 ******** 2026-03-19 00:51:17.543683 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.543687 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.543691 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.543697 | orchestrator | 2026-03-19 00:51:17.543701 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-19 00:51:17.543705 | orchestrator | Thursday 19 March 2026 00:50:07 +0000 (0:00:00.262) 0:01:08.923 ******** 2026-03-19 00:51:17.543709 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.543712 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.543716 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.543720 | orchestrator | 2026-03-19 00:51:17.543724 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-19 00:51:17.543728 | orchestrator | Thursday 19 March 2026 
00:50:08 +0000 (0:00:00.476) 0:01:09.400 ******** 2026-03-19 00:51:17.543731 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.543735 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.543739 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.543743 | orchestrator | 2026-03-19 00:51:17.543746 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-19 00:51:17.543750 | orchestrator | Thursday 19 March 2026 00:50:08 +0000 (0:00:00.275) 0:01:09.675 ******** 2026-03-19 00:51:17.543754 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.543758 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.543761 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.543765 | orchestrator | 2026-03-19 00:51:17.543769 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-19 00:51:17.543773 | orchestrator | Thursday 19 March 2026 00:50:08 +0000 (0:00:00.297) 0:01:09.972 ******** 2026-03-19 00:51:17.543776 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543780 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543784 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543788 | orchestrator | 2026-03-19 00:51:17.543791 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-19 00:51:17.543795 | orchestrator | Thursday 19 March 2026 00:50:08 +0000 (0:00:00.328) 0:01:10.301 ******** 2026-03-19 00:51:17.543799 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543806 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543810 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543814 | orchestrator | 2026-03-19 00:51:17.543818 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-19 00:51:17.543821 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.285) 
0:01:10.587 ******** 2026-03-19 00:51:17.543825 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543829 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543833 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543836 | orchestrator | 2026-03-19 00:51:17.543840 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-19 00:51:17.543844 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.393) 0:01:10.980 ******** 2026-03-19 00:51:17.543848 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543852 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543855 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543859 | orchestrator | 2026-03-19 00:51:17.543863 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-19 00:51:17.543867 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.360) 0:01:11.341 ******** 2026-03-19 00:51:17.543871 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543874 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543878 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543882 | orchestrator | 2026-03-19 00:51:17.543886 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-19 00:51:17.543889 | orchestrator | Thursday 19 March 2026 00:50:10 +0000 (0:00:00.336) 0:01:11.678 ******** 2026-03-19 00:51:17.543893 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543897 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543901 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543905 | orchestrator | 2026-03-19 00:51:17.543908 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-19 00:51:17.543912 | orchestrator | Thursday 19 March 2026 00:50:10 +0000 (0:00:00.289) 
0:01:11.967 ******** 2026-03-19 00:51:17.543916 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543920 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543923 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543927 | orchestrator | 2026-03-19 00:51:17.543931 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-19 00:51:17.543935 | orchestrator | Thursday 19 March 2026 00:50:11 +0000 (0:00:00.424) 0:01:12.391 ******** 2026-03-19 00:51:17.543939 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543942 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543946 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543950 | orchestrator | 2026-03-19 00:51:17.543954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-19 00:51:17.543957 | orchestrator | Thursday 19 March 2026 00:50:11 +0000 (0:00:00.251) 0:01:12.643 ******** 2026-03-19 00:51:17.543964 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543968 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543971 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543975 | orchestrator | 2026-03-19 00:51:17.543979 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-19 00:51:17.543983 | orchestrator | Thursday 19 March 2026 00:50:11 +0000 (0:00:00.356) 0:01:13.000 ******** 2026-03-19 00:51:17.543987 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.543990 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.543994 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.543998 | orchestrator | 2026-03-19 00:51:17.544002 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-19 00:51:17.544006 | orchestrator | Thursday 19 March 2026 00:50:12 +0000 (0:00:00.384) 
0:01:13.384 ******** 2026-03-19 00:51:17.544009 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544013 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544021 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544024 | orchestrator | 2026-03-19 00:51:17.544028 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-19 00:51:17.544032 | orchestrator | Thursday 19 March 2026 00:50:12 +0000 (0:00:00.422) 0:01:13.807 ******** 2026-03-19 00:51:17.544036 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544040 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544046 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544050 | orchestrator | 2026-03-19 00:51:17.544054 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 00:51:17.544058 | orchestrator | Thursday 19 March 2026 00:50:12 +0000 (0:00:00.342) 0:01:14.149 ******** 2026-03-19 00:51:17.544062 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:51:17.544065 | orchestrator | 2026-03-19 00:51:17.544069 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-19 00:51:17.544073 | orchestrator | Thursday 19 March 2026 00:50:13 +0000 (0:00:00.648) 0:01:14.798 ******** 2026-03-19 00:51:17.544077 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.544081 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.544084 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.544088 | orchestrator | 2026-03-19 00:51:17.544092 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-19 00:51:17.544096 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:00.577) 0:01:15.376 ******** 2026-03-19 00:51:17.544099 | orchestrator | ok: 
[testbed-node-0] 2026-03-19 00:51:17.544103 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.544107 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.544111 | orchestrator | 2026-03-19 00:51:17.544114 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-19 00:51:17.544118 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:00.399) 0:01:15.775 ******** 2026-03-19 00:51:17.544140 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544146 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544151 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544157 | orchestrator | 2026-03-19 00:51:17.544162 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-19 00:51:17.544168 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:00.336) 0:01:16.112 ******** 2026-03-19 00:51:17.544173 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544179 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544184 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544190 | orchestrator | 2026-03-19 00:51:17.544197 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-19 00:51:17.544202 | orchestrator | Thursday 19 March 2026 00:50:15 +0000 (0:00:00.279) 0:01:16.392 ******** 2026-03-19 00:51:17.544208 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544214 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544219 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544226 | orchestrator | 2026-03-19 00:51:17.544232 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-19 00:51:17.544238 | orchestrator | Thursday 19 March 2026 00:50:15 +0000 (0:00:00.395) 0:01:16.787 ******** 2026-03-19 00:51:17.544244 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 00:51:17.544251 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544256 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544260 | orchestrator | 2026-03-19 00:51:17.544264 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-19 00:51:17.544267 | orchestrator | Thursday 19 March 2026 00:50:15 +0000 (0:00:00.326) 0:01:17.114 ******** 2026-03-19 00:51:17.544271 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544275 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544279 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544287 | orchestrator | 2026-03-19 00:51:17.544291 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-19 00:51:17.544294 | orchestrator | Thursday 19 March 2026 00:50:16 +0000 (0:00:00.275) 0:01:17.389 ******** 2026-03-19 00:51:17.544298 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.544302 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.544306 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.544309 | orchestrator | 2026-03-19 00:51:17.544313 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-19 00:51:17.544317 | orchestrator | Thursday 19 March 2026 00:50:16 +0000 (0:00:00.287) 0:01:17.677 ******** 2026-03-19 00:51:17.544321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544356 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544371 | orchestrator | 2026-03-19 00:51:17.544375 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-19 00:51:17.544379 | orchestrator | Thursday 19 March 2026 00:50:17 +0000 (0:00:01.478) 0:01:19.155 ******** 2026-03-19 00:51:17.544383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544387 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544429 | orchestrator | 2026-03-19 00:51:17.544433 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-19 00:51:17.544437 | orchestrator | Thursday 19 March 2026 00:50:21 +0000 (0:00:03.644) 0:01:22.800 ******** 2026-03-19 00:51:17.544441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544486 | orchestrator | 2026-03-19 00:51:17.544490 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 00:51:17.544494 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:02.237) 0:01:25.038 ******** 2026-03-19 00:51:17.544498 | orchestrator | 2026-03-19 00:51:17.544502 | orchestrator | TASK [ovn-db : Flush handlers] 
*************************************************
2026-03-19 00:51:17.544505 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:00.059) 0:01:25.097 ********
2026-03-19 00:51:17.544516 | orchestrator |
2026-03-19 00:51:17.544522 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-19 00:51:17.544528 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:00.059) 0:01:25.157 ********
2026-03-19 00:51:17.544533 | orchestrator |
2026-03-19 00:51:17.544540 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-19 00:51:17.544545 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:00.064) 0:01:25.221 ********
2026-03-19 00:51:17.544551 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:51:17.544557 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:51:17.544563 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:51:17.544568 | orchestrator |
2026-03-19 00:51:17.544574 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-19 00:51:17.544580 | orchestrator | Thursday 19 March 2026 00:50:26 +0000 (0:00:02.798) 0:01:28.019 ********
2026-03-19 00:51:17.544586 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:51:17.544591 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:51:17.544597 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:51:17.544603 | orchestrator |
2026-03-19 00:51:17.544609 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-19 00:51:17.544615 | orchestrator | Thursday 19 March 2026 00:50:29 +0000 (0:00:02.784) 0:01:30.803 ********
2026-03-19 00:51:17.544622 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:51:17.544627 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:51:17.544633 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:51:17.544638 | orchestrator |
2026-03-19 00:51:17.544644 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-19 00:51:17.544650 | orchestrator | Thursday 19 March 2026 00:50:36 +0000 (0:00:07.217) 0:01:38.021 ********
2026-03-19 00:51:17.544656 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:51:17.544662 | orchestrator |
2026-03-19 00:51:17.544669 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-19 00:51:17.544674 | orchestrator | Thursday 19 March 2026 00:50:36 +0000 (0:00:00.117) 0:01:38.138 ********
2026-03-19 00:51:17.544681 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:51:17.544687 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:51:17.544692 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:51:17.544701 | orchestrator |
2026-03-19 00:51:17.544710 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-19 00:51:17.544720 | orchestrator | Thursday 19 March 2026 00:50:37 +0000 (0:00:00.788) 0:01:38.927 ********
2026-03-19 00:51:17.544725 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:51:17.544732 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:51:17.544737 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:51:17.544743 | orchestrator |
2026-03-19 00:51:17.544749 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-19 00:51:17.544755 | orchestrator | Thursday 19 March 2026 00:50:38 +0000 (0:00:00.541) 0:01:39.469 ********
2026-03-19 00:51:17.544760 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:51:17.544766 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:51:17.544773 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:51:17.544779 | orchestrator |
2026-03-19 00:51:17.544785 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-19 00:51:17.544798 | orchestrator | Thursday 19 March 2026 00:50:38 +0000 (0:00:00.832) 0:01:40.301 ********
2026-03-19 00:51:17.544804 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:51:17.544808 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:51:17.544812 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:51:17.544815 | orchestrator |
2026-03-19 00:51:17.544819 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-19 00:51:17.544823 | orchestrator | Thursday 19 March 2026 00:50:39 +0000 (0:00:00.606) 0:01:40.908 ********
2026-03-19 00:51:17.544827 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:51:17.544831 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:51:17.544839 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:51:17.544843 | orchestrator |
2026-03-19 00:51:17.544847 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-19 00:51:17.544851 | orchestrator | Thursday 19 March 2026 00:50:40 +0000 (0:00:01.006) 0:01:41.914 ********
2026-03-19 00:51:17.544854 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:51:17.544858 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:51:17.544862 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:51:17.544865 | orchestrator |
2026-03-19 00:51:17.544869 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-19 00:51:17.544873 | orchestrator | Thursday 19 March 2026 00:50:41 +0000 (0:00:00.886) 0:01:42.801 ********
2026-03-19 00:51:17.544877 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:51:17.544880 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:51:17.544884 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:51:17.544888 | orchestrator |
2026-03-19 00:51:17.544892 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-19 00:51:17.544896 | orchestrator | Thursday 19 March 2026 00:50:41 +0000 (0:00:00.381) 0:01:43.183 ********
2026-03-19 00:51:17.544900 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544904 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544920 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544930 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544934 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544945 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544949 | orchestrator | 2026-03-19 00:51:17.544953 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-19 00:51:17.544957 | 
orchestrator | Thursday 19 March 2026 00:50:43 +0000 (0:00:01.477) 0:01:44.661 ******** 2026-03-19 00:51:17.544961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544965 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544968 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544972 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.544998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545001 | orchestrator | 2026-03-19 
00:51:17.545005 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-19 00:51:17.545009 | orchestrator | Thursday 19 March 2026 00:50:47 +0000 (0:00:03.780) 0:01:48.441 ******** 2026-03-19 00:51:17.545016 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545025 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545037 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545055 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 00:51:17.545059 | orchestrator | 2026-03-19 00:51:17.545063 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 00:51:17.545067 | orchestrator | Thursday 19 March 2026 00:50:49 +0000 (0:00:02.926) 0:01:51.368 ******** 2026-03-19 00:51:17.545071 | orchestrator | 2026-03-19 00:51:17.545075 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 00:51:17.545079 | orchestrator | Thursday 19 March 2026 00:50:50 +0000 (0:00:00.064) 0:01:51.432 ******** 2026-03-19 00:51:17.545082 | orchestrator | 2026-03-19 00:51:17.545086 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 00:51:17.545090 | orchestrator | Thursday 19 March 2026 00:50:50 +0000 (0:00:00.081) 0:01:51.514 ******** 2026-03-19 00:51:17.545094 | orchestrator | 2026-03-19 00:51:17.545097 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-19 00:51:17.545101 | orchestrator | Thursday 19 March 2026 00:50:50 +0000 (0:00:00.180) 0:01:51.695 ******** 2026-03-19 00:51:17.545105 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:51:17.545109 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:51:17.545112 | orchestrator | 2026-03-19 00:51:17.545119 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-19 00:51:17.545140 | orchestrator | Thursday 19 March 2026 00:50:56 +0000 (0:00:06.179) 0:01:57.874 ******** 2026-03-19 00:51:17.545147 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:51:17.545154 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:51:17.545162 | orchestrator | 2026-03-19 00:51:17.545166 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-19 00:51:17.545170 | orchestrator | Thursday 19 March 2026 
00:51:02 +0000 (0:00:06.106) 0:02:03.981 ******** 2026-03-19 00:51:17.545173 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:51:17.545177 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:51:17.545181 | orchestrator | 2026-03-19 00:51:17.545185 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-19 00:51:17.545188 | orchestrator | Thursday 19 March 2026 00:51:08 +0000 (0:00:06.333) 0:02:10.315 ******** 2026-03-19 00:51:17.545193 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:51:17.545197 | orchestrator | 2026-03-19 00:51:17.545200 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-19 00:51:17.545204 | orchestrator | Thursday 19 March 2026 00:51:09 +0000 (0:00:00.118) 0:02:10.433 ******** 2026-03-19 00:51:17.545208 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.545211 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.545215 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.545220 | orchestrator | 2026-03-19 00:51:17.545223 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-19 00:51:17.545231 | orchestrator | Thursday 19 March 2026 00:51:09 +0000 (0:00:00.804) 0:02:11.238 ******** 2026-03-19 00:51:17.545235 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.545239 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.545243 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:51:17.545246 | orchestrator | 2026-03-19 00:51:17.545250 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-19 00:51:17.545254 | orchestrator | Thursday 19 March 2026 00:51:10 +0000 (0:00:00.822) 0:02:12.060 ******** 2026-03-19 00:51:17.545258 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.545261 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.545265 | orchestrator | ok: 
[testbed-node-2] 2026-03-19 00:51:17.545269 | orchestrator | 2026-03-19 00:51:17.545273 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-19 00:51:17.545277 | orchestrator | Thursday 19 March 2026 00:51:11 +0000 (0:00:00.755) 0:02:12.816 ******** 2026-03-19 00:51:17.545280 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:51:17.545284 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:51:17.545288 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:51:17.545292 | orchestrator | 2026-03-19 00:51:17.545296 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-19 00:51:17.545300 | orchestrator | Thursday 19 March 2026 00:51:12 +0000 (0:00:00.641) 0:02:13.457 ******** 2026-03-19 00:51:17.545303 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.545307 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.545311 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.545315 | orchestrator | 2026-03-19 00:51:17.545319 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-19 00:51:17.545323 | orchestrator | Thursday 19 March 2026 00:51:12 +0000 (0:00:00.848) 0:02:14.306 ******** 2026-03-19 00:51:17.545326 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:51:17.545330 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:51:17.545334 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:51:17.545337 | orchestrator | 2026-03-19 00:51:17.545341 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:51:17.545345 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-19 00:51:17.545349 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-19 00:51:17.545353 | orchestrator | testbed-node-2 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-19 00:51:17.545358 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:51:17.545361 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:51:17.545368 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 00:51:17.545372 | orchestrator | 2026-03-19 00:51:17.545376 | orchestrator | 2026-03-19 00:51:17.545380 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:51:17.545383 | orchestrator | Thursday 19 March 2026 00:51:14 +0000 (0:00:01.332) 0:02:15.639 ******** 2026-03-19 00:51:17.545387 | orchestrator | =============================================================================== 2026-03-19 00:51:17.545391 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.00s 2026-03-19 00:51:17.545395 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.93s 2026-03-19 00:51:17.545398 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.55s 2026-03-19 00:51:17.545405 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.98s 2026-03-19 00:51:17.545409 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.89s 2026-03-19 00:51:17.545412 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.78s 2026-03-19 00:51:17.545416 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.65s 2026-03-19 00:51:17.545423 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.93s 2026-03-19 00:51:17.545427 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch 
-------------------- 2.59s 2026-03-19 00:51:17.545431 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.28s 2026-03-19 00:51:17.545435 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.24s 2026-03-19 00:51:17.545439 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.97s 2026-03-19 00:51:17.545443 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.63s 2026-03-19 00:51:17.545446 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.61s 2026-03-19 00:51:17.545450 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s 2026-03-19 00:51:17.545454 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s 2026-03-19 00:51:17.545458 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.33s 2026-03-19 00:51:17.545462 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.18s 2026-03-19 00:51:17.545465 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.11s 2026-03-19 00:51:17.545469 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.01s 2026-03-19 00:51:17.545473 | orchestrator | 2026-03-19 00:51:17 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:51:17.545477 | orchestrator | 2026-03-19 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:51:20.575942 | orchestrator | 2026-03-19 00:51:20 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:51:20.578920 | orchestrator | 2026-03-19 00:51:20 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state STARTED 2026-03-19 00:51:20.578973 | orchestrator | 2026-03-19 00:51:20 | INFO  | Wait 1 
second(s) until the next check
[... repeated polling output elided: tasks b59676d3-0a83-4b90-8085-08f32ca42157 and 0328d96d-fd17-4a3e-8c43-1ae930108b61 remained in state STARTED, checked every ~3 seconds from 00:51:23 to 00:53:58 ...]
2026-03-19 00:53:58.955532 | orchestrator | 2026-03-19 00:53:58 | INFO  | Wait
1 second(s) until the next check 2026-03-19 00:54:02.007448 | orchestrator | 2026-03-19 00:54:02 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:54:02.010191 | orchestrator | 2026-03-19 00:54:02 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:54:02.012048 | orchestrator | 2026-03-19 00:54:02 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:54:02.020255 | orchestrator | 2026-03-19 00:54:02 | INFO  | Task 0328d96d-fd17-4a3e-8c43-1ae930108b61 is in state SUCCESS 2026-03-19 00:54:02.022409 | orchestrator | 2026-03-19 00:54:02.022523 | orchestrator | 2026-03-19 00:54:02.022558 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:54:02.022567 | orchestrator | 2026-03-19 00:54:02.022574 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 00:54:02.022581 | orchestrator | Thursday 19 March 2026 00:47:54 +0000 (0:00:00.413) 0:00:00.413 ******** 2026-03-19 00:54:02.022588 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.022595 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.022602 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.022608 | orchestrator | 2026-03-19 00:54:02.022615 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:54:02.022621 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.326) 0:00:00.740 ******** 2026-03-19 00:54:02.022628 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-19 00:54:02.022635 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-19 00:54:02.022641 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-19 00:54:02.022648 | orchestrator | 2026-03-19 00:54:02.022654 | orchestrator | PLAY [Apply role loadbalancer] 
************************************************* 2026-03-19 00:54:02.022661 | orchestrator | 2026-03-19 00:54:02.022667 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-19 00:54:02.022674 | orchestrator | Thursday 19 March 2026 00:47:55 +0000 (0:00:00.660) 0:00:01.402 ******** 2026-03-19 00:54:02.022680 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.022686 | orchestrator | 2026-03-19 00:54:02.022693 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-19 00:54:02.022700 | orchestrator | Thursday 19 March 2026 00:47:56 +0000 (0:00:01.272) 0:00:02.674 ******** 2026-03-19 00:54:02.022706 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.022713 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.022719 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.022726 | orchestrator | 2026-03-19 00:54:02.022732 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-19 00:54:02.022739 | orchestrator | Thursday 19 March 2026 00:47:58 +0000 (0:00:01.318) 0:00:03.992 ******** 2026-03-19 00:54:02.022746 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.022752 | orchestrator | 2026-03-19 00:54:02.022759 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-19 00:54:02.022765 | orchestrator | Thursday 19 March 2026 00:47:59 +0000 (0:00:00.879) 0:00:04.872 ******** 2026-03-19 00:54:02.022771 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.022778 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.022785 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.022791 | orchestrator | 2026-03-19 00:54:02.022798 | orchestrator | TASK [sysctl : Setting sysctl values] 
****************************************** 2026-03-19 00:54:02.022805 | orchestrator | Thursday 19 March 2026 00:48:01 +0000 (0:00:01.928) 0:00:06.801 ******** 2026-03-19 00:54:02.022811 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-19 00:54:02.022818 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-19 00:54:02.022825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-19 00:54:02.022849 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-19 00:54:02.022971 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-19 00:54:02.022985 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-19 00:54:02.023020 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-19 00:54:02.023028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-19 00:54:02.023037 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-19 00:54:02.023046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-19 00:54:02.023056 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-19 00:54:02.023064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-19 00:54:02.023072 | orchestrator | 2026-03-19 00:54:02.023079 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-19 00:54:02.023087 | orchestrator | Thursday 19 March 2026 00:48:04 +0000 (0:00:03.126) 
0:00:09.927 ******** 2026-03-19 00:54:02.023094 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-19 00:54:02.023104 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-19 00:54:02.023112 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-19 00:54:02.023120 | orchestrator | 2026-03-19 00:54:02.023126 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-19 00:54:02.023134 | orchestrator | Thursday 19 March 2026 00:48:05 +0000 (0:00:00.885) 0:00:10.813 ******** 2026-03-19 00:54:02.023142 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-19 00:54:02.023151 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-19 00:54:02.023159 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-19 00:54:02.023167 | orchestrator | 2026-03-19 00:54:02.023178 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-19 00:54:02.023186 | orchestrator | Thursday 19 March 2026 00:48:06 +0000 (0:00:01.483) 0:00:12.297 ******** 2026-03-19 00:54:02.023193 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-19 00:54:02.023200 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.023222 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-19 00:54:02.023237 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.023252 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-19 00:54:02.023262 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.023270 | orchestrator | 2026-03-19 00:54:02.023278 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-19 00:54:02.023286 | orchestrator | Thursday 19 March 2026 00:48:07 +0000 (0:00:00.785) 0:00:13.082 ******** 2026-03-19 00:54:02.023296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.023371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.023378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.023389 | orchestrator | 2026-03-19 00:54:02.023396 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-19 00:54:02.023403 | orchestrator | Thursday 19 March 2026 00:48:09 +0000 (0:00:01.660) 0:00:14.742 ******** 2026-03-19 00:54:02.023409 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.023416 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.023422 | orchestrator | changed: 
[testbed-node-0] 2026-03-19 00:54:02.023437 | orchestrator | 2026-03-19 00:54:02.023444 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-19 00:54:02.023451 | orchestrator | Thursday 19 March 2026 00:48:10 +0000 (0:00:01.075) 0:00:15.818 ******** 2026-03-19 00:54:02.023458 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-19 00:54:02.023464 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-19 00:54:02.023471 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-19 00:54:02.023477 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-19 00:54:02.023483 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-19 00:54:02.023490 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-19 00:54:02.023496 | orchestrator | 2026-03-19 00:54:02.023503 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-19 00:54:02.023509 | orchestrator | Thursday 19 March 2026 00:48:12 +0000 (0:00:02.237) 0:00:18.055 ******** 2026-03-19 00:54:02.023516 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.023522 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.023587 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.023596 | orchestrator | 2026-03-19 00:54:02.023603 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-19 00:54:02.023610 | orchestrator | Thursday 19 March 2026 00:48:14 +0000 (0:00:01.970) 0:00:20.026 ******** 2026-03-19 00:54:02.023618 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.023625 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.023632 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.023639 | orchestrator | 2026-03-19 00:54:02.023646 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-19 00:54:02.023653 | 
orchestrator | Thursday 19 March 2026 00:48:16 +0000 (0:00:02.092) 0:00:22.119 ******** 2026-03-19 00:54:02.023661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.023679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.023737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.023747 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 00:54:02.023754 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.023762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.023770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.023778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.023785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 00:54:02.023792 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.023809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.023822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.023830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.023838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 00:54:02.023845 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.023852 | orchestrator | 2026-03-19 00:54:02.023860 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-19 00:54:02.023867 | orchestrator | Thursday 19 March 2026 00:48:17 +0000 (0:00:00.872) 0:00:22.992 ******** 2026-03-19 00:54:02.023874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.023950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 00:54:02.023957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.023970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 00:54:02.023987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.023995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.024002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110', '__omit_place_holder__88d444fa7ba339b79ade2198313ecf53ae1d5110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 00:54:02.024008 | orchestrator | 2026-03-19 00:54:02.024015 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-19 00:54:02.024022 | orchestrator | Thursday 19 March 2026 00:48:20 +0000 (0:00:03.657) 0:00:26.650 ******** 2026-03-19 00:54:02.024029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024043 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024075 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.024090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.024097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.024107 | orchestrator | 2026-03-19 00:54:02.024114 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-19 00:54:02.024121 | orchestrator | Thursday 19 March 2026 00:48:24 +0000 (0:00:03.887) 0:00:30.537 ******** 2026-03-19 00:54:02.024128 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-19 00:54:02.024134 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-19 00:54:02.024141 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-19 00:54:02.024148 | orchestrator | 2026-03-19 00:54:02.024155 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-19 00:54:02.024161 | orchestrator | Thursday 19 March 2026 00:48:27 +0000 (0:00:02.541) 0:00:33.079 ******** 2026-03-19 00:54:02.024167 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-19 00:54:02.024174 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-19 00:54:02.024181 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-19 00:54:02.024188 | orchestrator | 2026-03-19 00:54:02.024462 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-19 00:54:02.024480 | orchestrator | Thursday 19 March 2026 00:48:31 +0000 (0:00:03.769) 0:00:36.848 ******** 
2026-03-19 00:54:02.024488 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.024495 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.024503 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.024510 | orchestrator | 2026-03-19 00:54:02.024516 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-19 00:54:02.024524 | orchestrator | Thursday 19 March 2026 00:48:32 +0000 (0:00:00.938) 0:00:37.787 ******** 2026-03-19 00:54:02.024531 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-19 00:54:02.024539 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-19 00:54:02.024547 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-19 00:54:02.024554 | orchestrator | 2026-03-19 00:54:02.024561 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-19 00:54:02.024568 | orchestrator | Thursday 19 March 2026 00:48:34 +0000 (0:00:02.231) 0:00:40.019 ******** 2026-03-19 00:54:02.024574 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-19 00:54:02.024582 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-19 00:54:02.024589 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-19 00:54:02.024596 | orchestrator | 2026-03-19 00:54:02.024604 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-19 00:54:02.024611 | orchestrator | Thursday 19 March 2026 00:48:36 +0000 (0:00:02.103) 0:00:42.122 
******** 2026-03-19 00:54:02.024618 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-19 00:54:02.024625 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-19 00:54:02.024632 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-19 00:54:02.024639 | orchestrator | 2026-03-19 00:54:02.024647 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-19 00:54:02.024653 | orchestrator | Thursday 19 March 2026 00:48:38 +0000 (0:00:02.138) 0:00:44.261 ******** 2026-03-19 00:54:02.024660 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-19 00:54:02.024674 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-19 00:54:02.024682 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-19 00:54:02.024689 | orchestrator | 2026-03-19 00:54:02.024696 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-19 00:54:02.024703 | orchestrator | Thursday 19 March 2026 00:48:40 +0000 (0:00:02.370) 0:00:46.631 ******** 2026-03-19 00:54:02.024711 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.024718 | orchestrator | 2026-03-19 00:54:02.024725 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-19 00:54:02.024733 | orchestrator | Thursday 19 March 2026 00:48:41 +0000 (0:00:00.817) 0:00:47.449 ******** 2026-03-19 00:54:02.024741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.024832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.024839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.024847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.024854 | orchestrator | 2026-03-19 00:54:02.024860 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-19 00:54:02.024867 | orchestrator | Thursday 19 March 2026 00:48:45 +0000 (0:00:03.342) 0:00:50.791 ******** 2026-03-19 00:54:02.024882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.024904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.024911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.024925 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.024932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.024939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.024945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.024951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.024965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.024971 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.024978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.024989 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.024995 | orchestrator | 2026-03-19 00:54:02.025002 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-19 00:54:02.025009 | orchestrator | Thursday 19 March 2026 00:48:45 +0000 (0:00:00.420) 0:00:51.211 ******** 2026-03-19 00:54:02.025015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025035 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.025083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025119 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.025126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025149 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.025155 | orchestrator | 2026-03-19 00:54:02.025162 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-19 00:54:02.025169 | orchestrator | Thursday 19 March 2026 00:48:46 +0000 (0:00:01.373) 0:00:52.585 ******** 2026-03-19 00:54:02.025176 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025210 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.025218 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025241 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.025249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025310 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.025323 | orchestrator | 2026-03-19 00:54:02.025330 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-03-19 00:54:02.025337 | orchestrator | Thursday 19 March 2026 00:48:48 +0000 (0:00:01.418) 0:00:54.004 ******** 2026-03-19 00:54:02.025349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-19 00:54:02.025370 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.025377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025401 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.025415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025436 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 00:54:02.025443 | orchestrator | 2026-03-19 00:54:02.025450 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-19 00:54:02.025456 | orchestrator | Thursday 19 March 2026 00:48:49 +0000 (0:00:01.446) 0:00:55.451 ******** 2026-03-19 00:54:02.025463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.025470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.025477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.025488 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.026505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026596 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.026603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026641 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.026648 | orchestrator | 2026-03-19 00:54:02.026655 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-19 00:54:02.026681 | orchestrator | Thursday 19 March 2026 00:48:51 +0000 (0:00:01.326) 0:00:56.778 ******** 2026-03-19 00:54:02.026689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026733 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.026740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026771 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.026778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026811 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.026817 | orchestrator | 2026-03-19 00:54:02.026824 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-19 00:54:02.026830 | orchestrator | Thursday 19 March 2026 00:48:51 +0000 (0:00:00.549) 0:00:57.328 ******** 2026-03-19 00:54:02.026837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026850 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026857 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.026864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026919 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026928 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.026935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.026954 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.026960 | orchestrator | 2026-03-19 00:54:02.026966 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-19 00:54:02.026973 | orchestrator | Thursday 19 March 2026 00:48:52 +0000 (0:00:00.708) 0:00:58.036 ******** 2026-03-19 00:54:02.026980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.026991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.026998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.027004 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.027017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.027024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.027031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.027037 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.027044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 00:54:02.027054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 00:54:02.027061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 00:54:02.027068 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.027074 | orchestrator | 2026-03-19 00:54:02.027142 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-19 00:54:02.027150 | orchestrator | Thursday 19 March 2026 00:48:53 +0000 (0:00:01.141) 0:00:59.178 ******** 2026-03-19 00:54:02.027157 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 00:54:02.027194 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 00:54:02.027204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 00:54:02.027211 | orchestrator | 2026-03-19 00:54:02.027222 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-19 00:54:02.027230 | orchestrator | Thursday 19 March 2026 00:48:54 +0000 (0:00:01.474) 0:01:00.653 ******** 2026-03-19 00:54:02.027238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 00:54:02.027245 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 00:54:02.027253 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 00:54:02.027261 | orchestrator | 2026-03-19 00:54:02.027269 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-19 00:54:02.027276 | orchestrator | Thursday 19 March 2026 00:48:56 +0000 (0:00:01.281) 0:01:01.934 ******** 2026-03-19 00:54:02.027284 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 00:54:02.027291 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 00:54:02.027299 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.027307 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 00:54:02.027314 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 00:54:02.027321 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 00:54:02.027329 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.027336 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 00:54:02.027350 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.027357 | orchestrator | 2026-03-19 00:54:02.027365 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-19 00:54:02.027374 | orchestrator | Thursday 19 March 2026 00:48:57 +0000 (0:00:01.276) 0:01:03.211 ******** 2026-03-19 00:54:02.027381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.027389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.027398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 00:54:02.027414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.027422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.027430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 00:54:02.027442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.027451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.027460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 00:54:02.027469 | orchestrator | 2026-03-19 00:54:02.027477 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-19 00:54:02.027486 | orchestrator | Thursday 19 March 2026 00:49:00 +0000 (0:00:02.792) 0:01:06.004 ******** 2026-03-19 00:54:02.027494 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.027501 | orchestrator | 2026-03-19 00:54:02.027508 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-19 00:54:02.027515 | orchestrator | Thursday 19 
March 2026 00:49:00 +0000 (0:00:00.585) 0:01:06.589 ******** 2026-03-19 00:54:02.027523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 00:54:02.027547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.027555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 00:54:02.027580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.027587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 00:54:02.027599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.027624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027638 | orchestrator | 2026-03-19 00:54:02.027668 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-19 00:54:02.027675 | orchestrator | Thursday 19 March 2026 00:49:04 +0000 (0:00:03.926) 0:01:10.515 ******** 2026-03-19 00:54:02.027694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 00:54:02.027708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.027720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027733 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.027739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 00:54:02.027767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.027774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027788 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.027801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 00:54:02.027831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.027854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.027869 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.027876 | orchestrator | 2026-03-19 00:54:02.027883 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-19 00:54:02.027901 | orchestrator | Thursday 19 March 2026 00:49:05 +0000 (0:00:00.761) 0:01:11.277 ******** 2026-03-19 00:54:02.027908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-19 00:54:02.027915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-19 00:54:02.027922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-19 00:54:02.027929 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.027936 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-19 00:54:02.027942 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.027947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-19 00:54:02.027958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-19 00:54:02.027964 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.027970 | orchestrator | 2026-03-19 00:54:02.027984 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-19 00:54:02.027990 | orchestrator | Thursday 19 March 2026 00:49:06 +0000 (0:00:01.001) 0:01:12.279 ******** 2026-03-19 00:54:02.027996 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.028003 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.028009 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.028015 | orchestrator | 2026-03-19 00:54:02.028022 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-19 00:54:02.028028 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:01.403) 0:01:13.683 ******** 2026-03-19 00:54:02.028034 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.028040 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.028046 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.028053 | orchestrator | 2026-03-19 00:54:02.028059 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-19 00:54:02.028065 | 
orchestrator | Thursday 19 March 2026 00:49:09 +0000 (0:00:01.802) 0:01:15.485 ******** 2026-03-19 00:54:02.028071 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.028078 | orchestrator | 2026-03-19 00:54:02.028084 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-19 00:54:02.028090 | orchestrator | Thursday 19 March 2026 00:49:10 +0000 (0:00:00.555) 0:01:16.040 ******** 2026-03-19 00:54:02.028097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.028104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.028134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.028156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028174 | orchestrator | 2026-03-19 00:54:02.028181 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-19 00:54:02.028187 | orchestrator | Thursday 19 March 2026 00:49:15 +0000 (0:00:04.774) 0:01:20.815 ******** 2026-03-19 00:54:02.028200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.028207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028221 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
00:54:02.028228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.028235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028252 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.028265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.028273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-03-19 00:54:02.028280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.028286 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.028293 | orchestrator | 2026-03-19 00:54:02.028299 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-19 00:54:02.028305 | orchestrator | Thursday 19 March 2026 00:49:16 +0000 (0:00:00.957) 0:01:21.772 ******** 2026-03-19 00:54:02.028313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 00:54:02.028323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 00:54:02.028330 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.028337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 00:54:02.028343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 00:54:02.028350 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.028356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 00:54:02.028363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 00:54:02.028370 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.028376 | orchestrator | 2026-03-19 00:54:02.028383 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-19 00:54:02.028390 | orchestrator | Thursday 19 March 2026 00:49:16 +0000 (0:00:00.721) 0:01:22.494 ******** 2026-03-19 00:54:02.028396 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.028402 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.028409 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.028415 | orchestrator | 2026-03-19 00:54:02.028422 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-19 00:54:02.028428 | orchestrator | Thursday 19 March 2026 00:49:18 +0000 (0:00:01.268) 0:01:23.762 ******** 2026-03-19 00:54:02.028500 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.028507 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.028514 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.028520 | orchestrator | 2026-03-19 00:54:02.028534 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-19 00:54:02.028541 | orchestrator | Thursday 19 March 2026 00:49:19 +0000 (0:00:01.764) 
0:01:25.526 ******** 2026-03-19 00:54:02.028548 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.028555 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.028561 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.028568 | orchestrator | 2026-03-19 00:54:02.028575 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-19 00:54:02.028581 | orchestrator | Thursday 19 March 2026 00:49:20 +0000 (0:00:00.272) 0:01:25.799 ******** 2026-03-19 00:54:02.028588 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.028595 | orchestrator | 2026-03-19 00:54:02.028602 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-19 00:54:02.028609 | orchestrator | Thursday 19 March 2026 00:49:20 +0000 (0:00:00.786) 0:01:26.586 ******** 2026-03-19 00:54:02.028616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 00:54:02.028628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 00:54:02.028636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 00:54:02.028643 | orchestrator | 2026-03-19 00:54:02.028650 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-19 00:54:02.028657 | orchestrator | Thursday 19 March 2026 00:49:23 +0000 (0:00:02.377) 0:01:28.963 ******** 2026-03-19 00:54:02.029352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 00:54:02.029380 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.029388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 00:54:02.029395 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.029411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 00:54:02.029418 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.029424 | orchestrator | 2026-03-19 00:54:02.029431 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-19 00:54:02.029436 | orchestrator | Thursday 19 March 2026 00:49:24 +0000 (0:00:01.427) 0:01:30.391 ******** 2026-03-19 00:54:02.029443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-19 00:54:02.029450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-19 00:54:02.029457 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.029463 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-19 00:54:02.029469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-19 00:54:02.029475 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.029497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-19 00:54:02.029504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-19 00:54:02.029510 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.029516 | orchestrator | 2026-03-19 00:54:02.029527 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2026-03-19 00:54:02.029534 | orchestrator | Thursday 19 March 2026 00:49:26 +0000 (0:00:01.672) 0:01:32.064 ******** 2026-03-19 00:54:02.029540 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.029546 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.029552 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.029558 | orchestrator | 2026-03-19 00:54:02.029565 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-19 00:54:02.029571 | orchestrator | Thursday 19 March 2026 00:49:26 +0000 (0:00:00.367) 0:01:32.431 ******** 2026-03-19 00:54:02.029578 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.029584 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.029619 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.029626 | orchestrator | 2026-03-19 00:54:02.029632 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-19 00:54:02.029654 | orchestrator | Thursday 19 March 2026 00:49:27 +0000 (0:00:01.044) 0:01:33.476 ******** 2026-03-19 00:54:02.029662 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.029668 | orchestrator | 2026-03-19 00:54:02.029675 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-19 00:54:02.029682 | orchestrator | Thursday 19 March 2026 00:49:28 +0000 (0:00:00.782) 0:01:34.259 ******** 2026-03-19 00:54:02.029763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.029775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.029828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.029871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029898 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029906 | orchestrator | 2026-03-19 00:54:02.029913 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-19 00:54:02.029920 | orchestrator | Thursday 19 March 2026 00:49:31 +0000 (0:00:03.174) 0:01:37.433 ******** 2026-03-19 00:54:02.029927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.029933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.029975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2026-03-19 00:54:02.029984 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.029991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.029999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030072 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.030089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.030098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030118 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.030124 | orchestrator | 2026-03-19 00:54:02.030131 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-19 00:54:02.030138 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.967) 0:01:38.401 ******** 2026-03-19 00:54:02.030145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-19 00:54:02.030153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-19 00:54:02.030165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-19 00:54:02.030174 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.030181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-19 00:54:02.030188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-19 00:54:02.030235 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.030252 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-19 00:54:02.030261 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.030269 | orchestrator | 2026-03-19 00:54:02.030277 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-19 00:54:02.030286 | orchestrator | Thursday 19 March 2026 00:49:34 +0000 (0:00:01.468) 0:01:39.870 ******** 2026-03-19 00:54:02.030294 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.030302 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.030311 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.030319 | orchestrator | 2026-03-19 00:54:02.030328 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-19 00:54:02.030335 | orchestrator | Thursday 19 March 2026 00:49:35 +0000 (0:00:01.308) 0:01:41.178 ******** 2026-03-19 00:54:02.030375 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.030383 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.030417 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.030423 | orchestrator | 2026-03-19 00:54:02.030430 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-19 00:54:02.030437 | orchestrator | Thursday 19 March 2026 00:49:37 +0000 (0:00:01.845) 0:01:43.023 ******** 2026-03-19 00:54:02.030445 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.030452 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.030459 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.030467 | orchestrator | 2026-03-19 00:54:02.030474 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-19 00:54:02.030482 | orchestrator | 
Thursday 19 March 2026 00:49:37 +0000 (0:00:00.355) 0:01:43.378 ******** 2026-03-19 00:54:02.030490 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.030497 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.030504 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.030511 | orchestrator | 2026-03-19 00:54:02.030518 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-19 00:54:02.030524 | orchestrator | Thursday 19 March 2026 00:49:37 +0000 (0:00:00.275) 0:01:43.654 ******** 2026-03-19 00:54:02.030530 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.030537 | orchestrator | 2026-03-19 00:54:02.030544 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-19 00:54:02.030552 | orchestrator | Thursday 19 March 2026 00:49:38 +0000 (0:00:00.824) 0:01:44.478 ******** 2026-03-19 00:54:02.030560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 00:54:02.030575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 00:54:02.030584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 00:54:02.030652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 00:54:02.030682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030691 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 00:54:02.030732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 00:54:02.030751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030774 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030787 | orchestrator | 2026-03-19 00:54:02.030793 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-19 00:54:02.030799 | orchestrator | Thursday 19 March 2026 00:49:44 +0000 (0:00:05.239) 0:01:49.718 ******** 2026-03-19 00:54:02.030805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 00:54:02.030824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 00:54:02.030831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 
00:54:02.030842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 00:54:02.030848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030861 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.030934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031006 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.031014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031036 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.031050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
2026-03-19 00:54:02.031057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 00:54:02.031069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031084 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.031133 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.031148 | orchestrator | 2026-03-19 00:54:02.031155 | orchestrator | TASK [haproxy-config : Configuring 
firewall for designate] ********************* 2026-03-19 00:54:02.031163 | orchestrator | Thursday 19 March 2026 00:49:45 +0000 (0:00:01.394) 0:01:51.112 ******** 2026-03-19 00:54:02.031177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-19 00:54:02.031188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-19 00:54:02.031196 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.031247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-19 00:54:02.031256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-19 00:54:02.031263 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.031269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-19 00:54:02.031276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-19 00:54:02.031283 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.031290 | orchestrator | 2026-03-19 00:54:02.031297 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-19 00:54:02.031304 | orchestrator | 
Thursday 19 March 2026 00:49:47 +0000 (0:00:02.243) 0:01:53.356 ******** 2026-03-19 00:54:02.031311 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.031318 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.031325 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.031332 | orchestrator | 2026-03-19 00:54:02.031339 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-19 00:54:02.031346 | orchestrator | Thursday 19 March 2026 00:49:48 +0000 (0:00:01.177) 0:01:54.533 ******** 2026-03-19 00:54:02.031353 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.031361 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.031368 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.031375 | orchestrator | 2026-03-19 00:54:02.031382 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-19 00:54:02.031389 | orchestrator | Thursday 19 March 2026 00:49:50 +0000 (0:00:01.847) 0:01:56.381 ******** 2026-03-19 00:54:02.031396 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.031403 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.031409 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.031416 | orchestrator | 2026-03-19 00:54:02.031423 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-19 00:54:02.031430 | orchestrator | Thursday 19 March 2026 00:49:50 +0000 (0:00:00.260) 0:01:56.641 ******** 2026-03-19 00:54:02.031437 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.031444 | orchestrator | 2026-03-19 00:54:02.031451 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-19 00:54:02.031473 | orchestrator | Thursday 19 March 2026 00:49:51 +0000 (0:00:00.959) 0:01:57.600 ******** 2026-03-19 00:54:02.031520 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 00:54:02.031538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.031555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 00:54:02.031583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.031593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 00:54:02.031652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.031664 | orchestrator | 2026-03-19 00:54:02.031671 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-19 00:54:02.031679 | orchestrator | Thursday 19 March 2026 00:49:57 +0000 (0:00:05.438) 0:02:03.039 ******** 2026-03-19 00:54:02.031687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 00:54:02.031739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.031755 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.031793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 00:54:02.031829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.031846 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.031854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 00:54:02.031867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.031879 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.031919 | orchestrator | 2026-03-19 00:54:02.031931 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-19 00:54:02.031938 | orchestrator | Thursday 19 March 2026 00:50:01 +0000 (0:00:04.274) 0:02:07.313 ******** 2026-03-19 00:54:02.031945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 00:54:02.031952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 00:54:02.031958 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.031965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 00:54:02.031980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 00:54:02.031988 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.031994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 00:54:02.032006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 00:54:02.032013 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.032019 | orchestrator | 2026-03-19 00:54:02.032025 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-19 00:54:02.032032 | orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:04.189) 0:02:11.502 ******** 2026-03-19 00:54:02.032038 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.032045 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.032071 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.032078 | orchestrator | 2026-03-19 00:54:02.032085 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-19 00:54:02.032092 | orchestrator | Thursday 19 March 2026 00:50:07 +0000 (0:00:01.719) 0:02:13.222 ******** 2026-03-19 00:54:02.032098 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.032104 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.032110 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.032117 | orchestrator | 2026-03-19 00:54:02.032124 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-19 00:54:02.032145 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:01.799) 0:02:15.021 ******** 2026-03-19 00:54:02.032153 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.032159 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.032166 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.032173 | orchestrator | 2026-03-19 00:54:02.032180 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-19 00:54:02.032187 | orchestrator | Thursday 19 March 2026 00:50:09 +0000 (0:00:00.228) 0:02:15.249 ******** 
2026-03-19 00:54:02.032193 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.032200 | orchestrator | 2026-03-19 00:54:02.032207 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-19 00:54:02.032213 | orchestrator | Thursday 19 March 2026 00:50:10 +0000 (0:00:01.088) 0:02:16.337 ******** 2026-03-19 00:54:02.032221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 00:54:02.032229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 00:54:02.032238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 00:54:02.032251 | orchestrator | 2026-03-19 00:54:02.032310 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-19 00:54:02.032348 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:03.771) 0:02:20.109 ******** 2026-03-19 00:54:02.032356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 00:54:02.032364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 00:54:02.032381 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.032392 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.032400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 00:54:02.032414 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.032420 | orchestrator | 2026-03-19 00:54:02.032427 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-19 00:54:02.032434 | orchestrator | Thursday 19 March 2026 00:50:14 +0000 (0:00:00.348) 0:02:20.457 ******** 2026-03-19 00:54:02.032442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-19 00:54:02.032450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-19 00:54:02.032457 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.032464 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-19 00:54:02.032497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-19 00:54:02.032506 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.032513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-19 00:54:02.032544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-19 00:54:02.032552 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.032560 | orchestrator |
2026-03-19 00:54:02.032567 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-19 00:54:02.032575 | orchestrator | Thursday 19 March 2026 00:50:15 +0000 (0:00:00.694) 0:02:21.151 ********
2026-03-19 00:54:02.032582 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:54:02.032590 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:54:02.032597 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:54:02.032604 | orchestrator |
2026-03-19 00:54:02.032612 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-19 00:54:02.032619 | orchestrator | Thursday 19 March 2026 00:50:16 +0000 (0:00:01.241) 0:02:22.392 ********
2026-03-19 00:54:02.032627 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:54:02.032634 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:54:02.032642 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:54:02.032649 | orchestrator |
2026-03-19 00:54:02.032656 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-19 00:54:02.032664 | orchestrator | Thursday 19 March 2026 00:50:18 +0000 (0:00:01.980) 0:02:24.373 ********
2026-03-19 00:54:02.032672 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.032679 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.032686 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.032693 | orchestrator |
2026-03-19 00:54:02.032701 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-19 00:54:02.032729 | orchestrator | Thursday 19 March 2026 00:50:18 +0000 (0:00:00.307) 0:02:24.680 ********
2026-03-19 00:54:02.032737 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:54:02.032743 | orchestrator |
2026-03-19 00:54:02.032750 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-19 00:54:02.032757 | orchestrator | Thursday 19 March 2026 00:50:19 +0000 (0:00:00.951) 0:02:25.631 ********
2026-03-19 00:54:02.032783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-19 00:54:02.032799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-19 00:54:02.032854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-19 00:54:02.032869 | orchestrator |
2026-03-19 00:54:02.032876 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-03-19 00:54:02.032883 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:03.188) 0:02:28.820 ********
2026-03-19 00:54:02.032908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-19 00:54:02.032958 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.032968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-19 00:54:02.032980 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.032995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-19 00:54:02.033003 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.033014 | orchestrator |
2026-03-19 00:54:02.033020 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-03-19 00:54:02.033027 | orchestrator | Thursday 19 March 2026 00:50:23 +0000 (0:00:00.578) 0:02:29.399 ********
2026-03-19 00:54:02.033034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-19 00:54:02.033042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-19 00:54:02.033050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-19 00:54:02.033057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-19 00:54:02.033065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-19 00:54:02.033072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-19 00:54:02.033080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-19 00:54:02.033086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-19 00:54:02.033092 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.033098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-19 00:54:02.033104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-19 00:54:02.033111 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.033117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-19 00:54:02.033139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-19 00:54:02.033148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-19 00:54:02.033155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-19 00:54:02.033162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-19 00:54:02.033169 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.033176 | orchestrator |
2026-03-19 00:54:02.033202 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-19 00:54:02.033209 | orchestrator | Thursday 19 March 2026 00:50:24 +0000 (0:00:00.848) 0:02:30.248 ********
2026-03-19 00:54:02.033215 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:54:02.033222 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:54:02.033228 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:54:02.033234 | orchestrator |
2026-03-19 00:54:02.033241 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-19 00:54:02.033248 | orchestrator | Thursday 19 March 2026 00:50:26 +0000 (0:00:01.517) 0:02:31.765 ********
2026-03-19 00:54:02.033255 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:54:02.033262 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:54:02.033269 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:54:02.033275 | orchestrator |
2026-03-19 00:54:02.033282 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-19 00:54:02.033288 | orchestrator | Thursday 19 March 2026 00:50:28 +0000 (0:00:02.106) 0:02:33.871 ********
2026-03-19 00:54:02.033294 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.033301 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.033308 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.033315 | orchestrator |
2026-03-19 00:54:02.033322 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-03-19 00:54:02.033328 | orchestrator | Thursday 19 March 2026 00:50:28 +0000 (0:00:00.244) 0:02:34.169 ********
2026-03-19 00:54:02.033374 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.033411 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.033419 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.033426 | orchestrator |
2026-03-19 00:54:02.033434 | orchestrator | TASK [include_role : keystone] *************************************************
2026-03-19 00:54:02.033471 | orchestrator | Thursday 19 March 2026 00:50:28 +0000 (0:00:00.244) 0:02:34.413 ********
2026-03-19 00:54:02.033478 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:54:02.033485 | orchestrator |
2026-03-19 00:54:02.033491 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-03-19 00:54:02.033514 | orchestrator | Thursday 19 March 2026 00:50:29 +0000 (0:00:01.006) 0:02:35.420 ********
2026-03-19 00:54:02.033522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:54:02.033580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:54:02.033589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:54:02.033596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:54:02.033603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:54:02.033609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:54:02.033620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:54:02.033639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:54:02.033646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:54:02.033653 | orchestrator |
2026-03-19 00:54:02.033659 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-03-19 00:54:02.033666 | orchestrator | Thursday 19 March 2026 00:50:32 +0000 (0:00:02.927) 0:02:38.348 ********
2026-03-19 00:54:02.033672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:54:02.033679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:54:02.033689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:54:02.033701 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.033727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:54:02.033735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:54:02.033742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:54:02.033748 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.033755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:54:02.033766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:54:02.033773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:54:02.033779 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:54:02.033786 | orchestrator |
2026-03-19 00:54:02.033793 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-19 00:54:02.033806 | orchestrator | Thursday 19 March 2026 00:50:33 +0000 (0:00:00.537) 0:02:38.885 ********
2026-03-19 00:54:02.033817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-19 00:54:02.033824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-19 00:54:02.033831 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.033838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-19 00:54:02.033845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-19 00:54:02.033851 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.033858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-19 00:54:02.033865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance
roundrobin']}})  2026-03-19 00:54:02.033871 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.033878 | orchestrator | 2026-03-19 00:54:02.033884 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-19 00:54:02.033910 | orchestrator | Thursday 19 March 2026 00:50:34 +0000 (0:00:00.843) 0:02:39.729 ******** 2026-03-19 00:54:02.033985 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.033993 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.033999 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.034006 | orchestrator | 2026-03-19 00:54:02.034067 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-19 00:54:02.034077 | orchestrator | Thursday 19 March 2026 00:50:35 +0000 (0:00:01.360) 0:02:41.090 ******** 2026-03-19 00:54:02.034084 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.034091 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.034098 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.034104 | orchestrator | 2026-03-19 00:54:02.034111 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-19 00:54:02.034118 | orchestrator | Thursday 19 March 2026 00:50:37 +0000 (0:00:02.029) 0:02:43.120 ******** 2026-03-19 00:54:02.034133 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.034141 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.034148 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.034155 | orchestrator | 2026-03-19 00:54:02.034162 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-19 00:54:02.034170 | orchestrator | Thursday 19 March 2026 00:50:37 +0000 (0:00:00.259) 0:02:43.379 ******** 2026-03-19 00:54:02.034177 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 
00:54:02.034184 | orchestrator | 2026-03-19 00:54:02.034191 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-19 00:54:02.034198 | orchestrator | Thursday 19 March 2026 00:50:38 +0000 (0:00:01.028) 0:02:44.408 ******** 2026-03-19 00:54:02.034207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 00:54:02.034236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 
00:54:02.034252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 00:54:02.034267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 00:54:02.034281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034289 | orchestrator | 2026-03-19 00:54:02.034296 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-19 00:54:02.034303 | orchestrator | Thursday 19 March 2026 00:50:42 +0000 (0:00:03.398) 0:02:47.806 ******** 2026-03-19 00:54:02.034320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 00:54:02.034328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034339 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.034347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 00:54:02.034354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034362 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.034378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 00:54:02.034389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034400 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.034408 | orchestrator | 2026-03-19 00:54:02.034415 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-19 00:54:02.034422 | orchestrator | Thursday 19 March 2026 00:50:42 +0000 (0:00:00.566) 0:02:48.373 ******** 2026-03-19 00:54:02.034430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-19 00:54:02.034438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-19 00:54:02.034445 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.034452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-19 00:54:02.034459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-19 00:54:02.034466 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.034473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-19 00:54:02.034480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-19 00:54:02.034488 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.034495 | orchestrator | 2026-03-19 00:54:02.034502 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-19 00:54:02.034509 | orchestrator | Thursday 19 March 2026 00:50:43 +0000 (0:00:00.961) 0:02:49.334 ******** 2026-03-19 00:54:02.034516 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.034523 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.034531 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.034538 | orchestrator | 2026-03-19 00:54:02.034545 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-19 00:54:02.034552 | orchestrator | Thursday 19 March 2026 00:50:44 +0000 (0:00:01.147) 0:02:50.482 ******** 2026-03-19 00:54:02.034560 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.034567 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.034574 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.034581 | orchestrator | 2026-03-19 00:54:02.034589 | 
orchestrator | TASK [include_role : manila] *************************************************** 2026-03-19 00:54:02.034596 | orchestrator | Thursday 19 March 2026 00:50:46 +0000 (0:00:01.918) 0:02:52.400 ******** 2026-03-19 00:54:02.034604 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.034621 | orchestrator | 2026-03-19 00:54:02.034629 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-19 00:54:02.034636 | orchestrator | Thursday 19 March 2026 00:50:47 +0000 (0:00:00.964) 0:02:53.365 ******** 2026-03-19 00:54:02.034644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 00:54:02.034671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 00:54:02.034680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 00:54:02.034749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034771 | orchestrator | 2026-03-19 00:54:02.034778 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-19 00:54:02.034784 | orchestrator | Thursday 19 March 2026 00:50:51 +0000 (0:00:03.559) 0:02:56.924 ******** 2026-03-19 00:54:02.034800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 00:54:02.034819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034842 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.034849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 00:54:02.034858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034869 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034933 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.034941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 00:54:02.034947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.034970 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.034977 | orchestrator | 2026-03-19 00:54:02.034984 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-19 00:54:02.034991 | orchestrator | Thursday 19 March 2026 00:50:51 +0000 (0:00:00.611) 0:02:57.535 ******** 2026-03-19 00:54:02.034998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-19 00:54:02.035016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-19 00:54:02.035023 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-19 00:54:02.035043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-19 00:54:02.035050 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-19 00:54:02.035064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-19 00:54:02.035070 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035077 | orchestrator | 2026-03-19 00:54:02.035084 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-19 00:54:02.035090 | orchestrator | Thursday 19 March 2026 00:50:52 +0000 (0:00:00.791) 0:02:58.327 ******** 2026-03-19 00:54:02.035097 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.035104 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.035110 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.035117 | orchestrator | 2026-03-19 00:54:02.035123 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-19 00:54:02.035130 | orchestrator | Thursday 19 March 2026 00:50:53 +0000 (0:00:01.306) 0:02:59.634 ******** 2026-03-19 00:54:02.035136 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.035153 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.035160 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.035167 | orchestrator | 2026-03-19 00:54:02.035173 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-19 00:54:02.035180 | orchestrator | Thursday 19 March 2026 00:50:55 +0000 (0:00:01.942) 0:03:01.576 ******** 2026-03-19 00:54:02.035187 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.035194 | orchestrator | 2026-03-19 00:54:02.035200 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-19 00:54:02.035207 | orchestrator | Thursday 19 March 2026 00:50:57 +0000 (0:00:01.156) 0:03:02.733 ******** 2026-03-19 00:54:02.035214 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 00:54:02.035220 | orchestrator | 2026-03-19 00:54:02.035227 | orchestrator | TASK 
[haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-19 00:54:02.035233 | orchestrator | Thursday 19 March 2026 00:50:59 +0000 (0:00:02.858) 0:03:05.591 ******** 2026-03-19 00:54:02.035240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 00:54:02.035268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 00:54:02.035277 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 00:54:02.035297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 00:54:02.035304 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 00:54:02.035325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 00:54:02.035332 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035339 | orchestrator | 2026-03-19 00:54:02.035346 | orchestrator | TASK 
[haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-19 00:54:02.035352 | orchestrator | Thursday 19 March 2026 00:51:02 +0000 (0:00:02.096) 0:03:07.688 ******** 2026-03-19 00:54:02.035379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 00:54:02.035394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 00:54:02.035402 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 00:54:02.035431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 00:54:02.035444 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 00:54:02.035470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 00:54:02.035479 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035485 | orchestrator | 2026-03-19 00:54:02.035492 | orchestrator | TASK 
[haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-19 00:54:02.035498 | orchestrator | Thursday 19 March 2026 00:51:04 +0000 (0:00:02.518) 0:03:10.207 ******** 2026-03-19 00:54:02.035504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 00:54:02.035511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 00:54:02.035523 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 00:54:02.035536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 00:54:02.035542 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 00:54:02.035566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 00:54:02.035573 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035580 | orchestrator | 2026-03-19 00:54:02.035586 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-19 00:54:02.035592 | orchestrator | Thursday 19 March 2026 00:51:06 +0000 (0:00:02.109) 0:03:12.316 ******** 2026-03-19 00:54:02.035599 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.035606 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.035612 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.035619 | orchestrator | 2026-03-19 00:54:02.035625 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-19 00:54:02.035631 | orchestrator | Thursday 19 March 2026 00:51:08 +0000 (0:00:01.935) 0:03:14.251 ******** 2026-03-19 00:54:02.035638 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035654 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035662 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035678 | orchestrator | 2026-03-19 00:54:02.035684 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-19 00:54:02.035691 | orchestrator | Thursday 19 March 2026 00:51:09 +0000 (0:00:01.437) 0:03:15.689 ******** 2026-03-19 00:54:02.035698 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035705 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035712 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035730 | orchestrator | 2026-03-19 00:54:02.035738 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-19 00:54:02.035744 | orchestrator | Thursday 19 March 2026 00:51:10 +0000 (0:00:00.320) 0:03:16.009 ******** 2026-03-19 00:54:02.035750 | orchestrator | 
included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.035766 | orchestrator | 2026-03-19 00:54:02.035772 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-19 00:54:02.035779 | orchestrator | Thursday 19 March 2026 00:51:11 +0000 (0:00:01.353) 0:03:17.362 ******** 2026-03-19 00:54:02.035786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 00:54:02.035793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 00:54:02.035801 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 00:54:02.035807 | orchestrator | 2026-03-19 00:54:02.035814 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-19 00:54:02.035821 | orchestrator | Thursday 19 March 2026 00:51:13 +0000 (0:00:01.488) 0:03:18.851 ******** 2026-03-19 00:54:02.035849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 00:54:02.035861 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 00:54:02.035875 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 00:54:02.035941 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.035949 | orchestrator | 2026-03-19 00:54:02.035956 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-19 00:54:02.035962 | orchestrator | Thursday 19 March 2026 00:51:13 +0000 (0:00:00.397) 0:03:19.249 ******** 2026-03-19 00:54:02.035969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 00:54:02.035976 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.035983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 00:54:02.035990 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.035998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 00:54:02.036005 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.036013 | orchestrator | 2026-03-19 00:54:02.036019 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-19 00:54:02.036026 | orchestrator | Thursday 19 March 2026 00:51:14 +0000 (0:00:00.936) 0:03:20.185 ******** 2026-03-19 00:54:02.036033 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.036041 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.036048 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.036056 | orchestrator | 2026-03-19 00:54:02.036063 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-19 00:54:02.036077 | orchestrator | Thursday 19 March 2026 00:51:14 +0000 (0:00:00.455) 0:03:20.640 ******** 2026-03-19 00:54:02.036084 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.036092 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.036098 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.036106 | orchestrator | 2026-03-19 00:54:02.036113 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-19 00:54:02.036119 | orchestrator | Thursday 19 March 2026 00:51:16 +0000 (0:00:01.236) 0:03:21.876 ******** 2026-03-19 00:54:02.036126 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.036133 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.036140 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.036147 | orchestrator | 2026-03-19 00:54:02.036154 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-19 00:54:02.036177 | orchestrator | Thursday 19 March 2026 00:51:16 +0000 (0:00:00.304) 0:03:22.181 ******** 2026-03-19 00:54:02.036185 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.036192 | orchestrator | 2026-03-19 00:54:02.036198 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-19 00:54:02.036204 | orchestrator | Thursday 19 March 2026 00:51:17 +0000 (0:00:01.422) 0:03:23.604 ******** 2026-03-19 00:54:02.036210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 00:54:02.036218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 00:54:02.036233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 00:54:02.036293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-03-19 00:54:02.036299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 00:54:02.036366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 00:54:02.036418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 00:54:02.036470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 00:54:02.036540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.036552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.036566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.036574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.036642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.036660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 00:54:02.036700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.036715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 00:54:02.036755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.036763 | orchestrator |
2026-03-19 00:54:02.036770 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-19 00:54:02.036783 | orchestrator | Thursday 19 March 2026 00:51:22 +0000 (0:00:04.237) 0:03:27.842 ********
2026-03-19 00:54:02.036791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-19 00:54:02.036798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-19 00:54:02.036810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-19 00:54:02.036881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-19 00:54:02.036921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.036968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.036987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.036994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.037020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 00:54:02.037107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 00:54:02.037113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.037124 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:54:02.037130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.037137 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:54:02.037145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-19 00:54:02.037152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-19 00:54:02.037199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 00:54:02.037261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 00:54:02.037269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 00:54:02.037276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 00:54:02.037283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.037289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 00:54:02.037307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 00:54:02.037327 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.037334 | orchestrator | 2026-03-19 00:54:02.037341 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-19 00:54:02.037356 | orchestrator | Thursday 19 March 2026 00:51:24 +0000 (0:00:01.856) 0:03:29.698 ******** 2026-03-19 00:54:02.037364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-19 00:54:02.037372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-19 00:54:02.037379 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.037385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-19 00:54:02.037391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-19 00:54:02.037397 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.037404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-19 00:54:02.037411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-19 00:54:02.037418 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.037425 | orchestrator | 2026-03-19 00:54:02.037431 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-19 00:54:02.037438 | orchestrator | Thursday 19 March 2026 00:51:25 +0000 (0:00:01.387) 0:03:31.086 ******** 2026-03-19 00:54:02.037445 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.037452 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.037458 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.037474 | orchestrator | 2026-03-19 00:54:02.037482 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-19 00:54:02.037489 | orchestrator | Thursday 19 March 2026 00:51:26 +0000 (0:00:01.285) 0:03:32.371 ******** 2026-03-19 00:54:02.037495 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.037502 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.037508 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.037515 | orchestrator | 2026-03-19 00:54:02.037522 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-19 00:54:02.037529 | orchestrator | Thursday 19 March 2026 00:51:28 +0000 (0:00:01.998) 0:03:34.370 ******** 2026-03-19 00:54:02.037535 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.037542 | orchestrator | 2026-03-19 00:54:02.037549 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 
2026-03-19 00:54:02.037556 | orchestrator | Thursday 19 March 2026 00:51:30 +0000 (0:00:01.449) 0:03:35.819 ******** 2026-03-19 00:54:02.037563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.037590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 
00:54:02.037598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.037606 | orchestrator | 2026-03-19 00:54:02.037612 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-19 00:54:02.037619 | orchestrator | Thursday 19 March 2026 00:51:33 +0000 (0:00:03.005) 0:03:38.824 ******** 2026-03-19 00:54:02.037626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.037632 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.037639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.037649 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.037668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.037676 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.037683 | orchestrator | 2026-03-19 00:54:02.037690 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-19 00:54:02.037697 | orchestrator | Thursday 19 March 2026 00:51:33 +0000 (0:00:00.443) 0:03:39.268 ******** 2026-03-19 00:54:02.037704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-19 00:54:02.037711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-19 00:54:02.037719 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.037726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-19 00:54:02.037732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-19 00:54:02.037739 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.037746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-19 00:54:02.037753 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-19 00:54:02.037760 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.037767 | orchestrator | 2026-03-19 00:54:02.037773 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-19 00:54:02.037780 | orchestrator | Thursday 19 March 2026 00:51:34 +0000 (0:00:00.860) 0:03:40.129 ******** 2026-03-19 00:54:02.037787 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.037793 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.037800 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.037807 | orchestrator | 2026-03-19 00:54:02.037814 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-19 00:54:02.037821 | orchestrator | Thursday 19 March 2026 00:51:35 +0000 (0:00:01.197) 0:03:41.327 ******** 2026-03-19 00:54:02.037827 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.037833 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.037846 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.037852 | orchestrator | 2026-03-19 00:54:02.037859 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-19 00:54:02.037866 | orchestrator | Thursday 19 March 2026 00:51:37 +0000 (0:00:01.795) 0:03:43.122 ******** 2026-03-19 00:54:02.037872 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.037879 | orchestrator | 2026-03-19 00:54:02.037921 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-19 00:54:02.037929 | orchestrator | Thursday 19 March 2026 00:51:38 +0000 (0:00:01.186) 0:03:44.308 ******** 2026-03-19 00:54:02.037937 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.037962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.037970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.037977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.037990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.037998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.038056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038077 | orchestrator | 2026-03-19 00:54:02.038085 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-19 00:54:02.038093 | orchestrator | Thursday 19 March 2026 00:51:43 +0000 (0:00:04.426) 0:03:48.735 ******** 2026-03-19 00:54:02.038103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.038112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038139 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.038155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038166 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.038173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.038191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.038204 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.038210 | orchestrator | 2026-03-19 00:54:02.038216 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-19 00:54:02.038222 | orchestrator | Thursday 19 March 2026 00:51:43 +0000 (0:00:00.557) 0:03:49.292 ******** 2026-03-19 00:54:02.038232 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038258 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038290 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.038297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 00:54:02.038323 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.038330 | orchestrator | 2026-03-19 00:54:02.038337 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-19 00:54:02.038355 | orchestrator | Thursday 19 March 2026 00:51:44 +0000 (0:00:00.819) 0:03:50.112 ******** 2026-03-19 00:54:02.038362 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.038369 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.038376 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.038382 | orchestrator | 2026-03-19 00:54:02.038389 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-19 00:54:02.038396 | orchestrator | Thursday 19 March 2026 00:51:46 +0000 
(0:00:01.658) 0:03:51.771 ******** 2026-03-19 00:54:02.038402 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.038409 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.038415 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.038424 | orchestrator | 2026-03-19 00:54:02.038431 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-19 00:54:02.038437 | orchestrator | Thursday 19 March 2026 00:51:48 +0000 (0:00:02.086) 0:03:53.857 ******** 2026-03-19 00:54:02.038444 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.038450 | orchestrator | 2026-03-19 00:54:02.038456 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-19 00:54:02.038462 | orchestrator | Thursday 19 March 2026 00:51:49 +0000 (0:00:01.201) 0:03:55.059 ******** 2026-03-19 00:54:02.038468 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-19 00:54:02.038476 | orchestrator | 2026-03-19 00:54:02.038482 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-19 00:54:02.038488 | orchestrator | Thursday 19 March 2026 00:51:50 +0000 (0:00:01.136) 0:03:56.195 ******** 2026-03-19 00:54:02.038495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-19 00:54:02.038503 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-19 00:54:02.038509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-19 00:54:02.038516 | orchestrator | 2026-03-19 00:54:02.038522 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-19 00:54:02.038529 | orchestrator | Thursday 19 March 2026 00:51:54 +0000 (0:00:03.494) 0:03:59.690 ******** 2026-03-19 00:54:02.038536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038553 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038562 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038581 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.038602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038610 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.038616 | orchestrator | 2026-03-19 00:54:02.038623 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-19 00:54:02.038629 | orchestrator | Thursday 19 March 2026 00:51:55 +0000 (0:00:01.096) 0:04:00.787 ******** 2026-03-19 00:54:02.038636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 00:54:02.038644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 00:54:02.038651 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 00:54:02.038665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 00:54:02.038672 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.038679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 00:54:02.038686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 00:54:02.038692 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.038698 | orchestrator | 2026-03-19 00:54:02.038704 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-19 00:54:02.038711 | orchestrator | Thursday 19 March 2026 00:51:56 +0000 (0:00:01.581) 0:04:02.368 ******** 2026-03-19 00:54:02.038717 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.038723 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.038730 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.038737 | orchestrator | 2026-03-19 00:54:02.038744 | orchestrator | TASK [proxysql-config : 
Copying over nova-cell ProxySQL rules config] ********** 2026-03-19 00:54:02.038751 | orchestrator | Thursday 19 March 2026 00:51:58 +0000 (0:00:02.077) 0:04:04.445 ******** 2026-03-19 00:54:02.038758 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.038765 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.038771 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.038778 | orchestrator | 2026-03-19 00:54:02.038784 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-19 00:54:02.038791 | orchestrator | Thursday 19 March 2026 00:52:01 +0000 (0:00:02.760) 0:04:07.206 ******** 2026-03-19 00:54:02.038798 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-19 00:54:02.038812 | orchestrator | 2026-03-19 00:54:02.038819 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-19 00:54:02.038826 | orchestrator | Thursday 19 March 2026 00:52:02 +0000 (0:00:00.750) 0:04:07.956 ******** 2026-03-19 00:54:02.038833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038840 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038872 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.038878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038897 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.038907 | orchestrator | 2026-03-19 00:54:02.038916 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-19 00:54:02.038923 | orchestrator | Thursday 19 March 2026 00:52:03 +0000 (0:00:01.157) 0:04:09.114 ******** 2026-03-19 00:54:02.038929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 
00:54:02.038937 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038949 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.038956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 00:54:02.038967 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.038973 | orchestrator | 2026-03-19 00:54:02.038979 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-19 00:54:02.038985 | orchestrator | Thursday 19 March 2026 00:52:04 +0000 (0:00:01.296) 0:04:10.411 ******** 2026-03-19 00:54:02.038991 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.038997 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.039003 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.039009 | orchestrator | 2026-03-19 00:54:02.039016 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] 
********** 2026-03-19 00:54:02.039022 | orchestrator | Thursday 19 March 2026 00:52:05 +0000 (0:00:01.145) 0:04:11.556 ******** 2026-03-19 00:54:02.039028 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.039036 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.039043 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.039049 | orchestrator | 2026-03-19 00:54:02.039056 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-19 00:54:02.039062 | orchestrator | Thursday 19 March 2026 00:52:08 +0000 (0:00:02.322) 0:04:13.879 ******** 2026-03-19 00:54:02.039069 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.039075 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.039082 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.039088 | orchestrator | 2026-03-19 00:54:02.039094 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-19 00:54:02.039101 | orchestrator | Thursday 19 March 2026 00:52:10 +0000 (0:00:02.745) 0:04:16.625 ******** 2026-03-19 00:54:02.039108 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-19 00:54:02.039114 | orchestrator | 2026-03-19 00:54:02.039120 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-19 00:54:02.039126 | orchestrator | Thursday 19 March 2026 00:52:11 +0000 (0:00:00.789) 0:04:17.414 ******** 2026-03-19 00:54:02.039149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 00:54:02.039156 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.039163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 00:54:02.039169 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.039175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 00:54:02.039186 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.039192 | orchestrator | 2026-03-19 00:54:02.039199 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-19 00:54:02.039206 | orchestrator | Thursday 19 March 2026 00:52:12 +0000 (0:00:01.105) 0:04:18.519 ******** 2026-03-19 00:54:02.039212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 00:54:02.039219 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.039225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 00:54:02.039231 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.039237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 00:54:02.039244 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.039250 | orchestrator | 2026-03-19 00:54:02.039256 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-19 00:54:02.039263 | orchestrator | Thursday 19 March 2026 00:52:14 +0000 
(0:00:01.216) 0:04:19.736 ******** 2026-03-19 00:54:02.039269 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.039275 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.039281 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.039287 | orchestrator | 2026-03-19 00:54:02.039294 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-19 00:54:02.039300 | orchestrator | Thursday 19 March 2026 00:52:15 +0000 (0:00:01.539) 0:04:21.275 ******** 2026-03-19 00:54:02.039306 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.039322 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.039329 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.039335 | orchestrator | 2026-03-19 00:54:02.039346 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-19 00:54:02.039353 | orchestrator | Thursday 19 March 2026 00:52:18 +0000 (0:00:02.796) 0:04:24.072 ******** 2026-03-19 00:54:02.039359 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.039365 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.039371 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.039377 | orchestrator | 2026-03-19 00:54:02.039384 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-19 00:54:02.039390 | orchestrator | Thursday 19 March 2026 00:52:21 +0000 (0:00:03.200) 0:04:27.272 ******** 2026-03-19 00:54:02.039396 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.039402 | orchestrator | 2026-03-19 00:54:02.039409 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-19 00:54:02.039415 | orchestrator | Thursday 19 March 2026 00:52:22 +0000 (0:00:01.287) 0:04:28.560 ******** 2026-03-19 00:54:02.039428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.039435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 00:54:02.039467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.039500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.039512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 00:54:02.039520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.039540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.039557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 00:54:02.039567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.039587 | orchestrator | 2026-03-19 00:54:02.039594 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-19 00:54:02.039601 | orchestrator | Thursday 19 March 2026 00:52:26 +0000 (0:00:03.930) 0:04:32.490 ******** 2026-03-19 00:54:02.039608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.039614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 00:54:02.039632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 
00:54:02.039656 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.039663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.039670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 00:54:02.039676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.039729 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.039737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.039743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 00:54:02.039750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 00:54:02.039763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 00:54:02.039783 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.039791 | orchestrator | 2026-03-19 00:54:02.039801 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-19 00:54:02.039808 | orchestrator | Thursday 19 March 2026 00:52:27 +0000 (0:00:01.043) 0:04:33.534 ******** 2026-03-19 00:54:02.039815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 00:54:02.039822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 00:54:02.039829 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.039835 
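The loop items dumped above are kolla-style service definitions, and each `healthcheck` sub-dict (interval, retries, start_period, test, timeout, all as strings of seconds) corresponds to Docker's container healthcheck options, which the Engine API takes in nanoseconds. A minimal sketch of that translation, using the `octavia-worker` healthcheck copied from the log; `to_docker_healthcheck` is a hypothetical helper for illustration, not kolla-ansible code:

```python
# Sketch: convert a kolla-style healthcheck dict (seconds, as strings)
# into the keyword form the Docker Engine API expects (nanoseconds).
# to_docker_healthcheck() is a hypothetical helper, not kolla-ansible code.

NANOS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port ... 5672']
        "interval": int(hc["interval"]) * NANOS_PER_SECOND,
        "timeout": int(hc["timeout"]) * NANOS_PER_SECOND,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * NANOS_PER_SECOND,
    }

# Healthcheck dict copied from the 'octavia-worker' loop item in the log above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["interval"])  # 30000000000
```

The string-typed numbers come from Ansible templating; the conversion to integer nanoseconds is what the container runtime ultimately stores.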
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 00:54:02.039841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 00:54:02.039848 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.039855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 00:54:02.039861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 00:54:02.039868 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.039875 | orchestrator | 2026-03-19 00:54:02.039882 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-19 00:54:02.039935 | orchestrator | Thursday 19 March 2026 00:52:28 +0000 (0:00:00.905) 0:04:34.439 ******** 2026-03-19 00:54:02.039943 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.039949 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.039955 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.039961 | orchestrator | 2026-03-19 00:54:02.039967 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-19 00:54:02.039973 | orchestrator | Thursday 19 March 2026 00:52:30 +0000 (0:00:01.505) 0:04:35.944 ******** 2026-03-19 00:54:02.039980 | orchestrator | changed: [testbed-node-0] 2026-03-19 
00:54:02.039986 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.039992 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.039999 | orchestrator | 2026-03-19 00:54:02.040005 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-19 00:54:02.040011 | orchestrator | Thursday 19 March 2026 00:52:32 +0000 (0:00:02.292) 0:04:38.237 ******** 2026-03-19 00:54:02.040018 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.040024 | orchestrator | 2026-03-19 00:54:02.040030 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-19 00:54:02.040037 | orchestrator | Thursday 19 March 2026 00:52:34 +0000 (0:00:01.647) 0:04:39.884 ******** 2026-03-19 00:54:02.040044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:54:02.040074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:54:02.040081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:54:02.040089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:54:02.040097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:54:02.040120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:54:02.040128 | orchestrator | 2026-03-19 00:54:02.040135 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-19 00:54:02.040142 | orchestrator | Thursday 19 March 2026 00:52:39 +0000 (0:00:05.348) 0:04:45.233 ******** 2026-03-19 00:54:02.040224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 00:54:02.040243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 00:54:02.040251 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.040259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 00:54:02.040272 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 00:54:02.040288 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.040309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 00:54:02.040318 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 00:54:02.040324 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.040330 | orchestrator | 2026-03-19 00:54:02.040337 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-19 00:54:02.040343 | orchestrator | Thursday 19 March 2026 00:52:40 +0000 (0:00:01.048) 0:04:46.282 ******** 2026-03-19 00:54:02.040349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-19 00:54:02.040360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 00:54:02.040368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 00:54:02.040374 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.040381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-19 00:54:02.040387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 00:54:02.040394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 00:54:02.040400 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.040406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-19 00:54:02.040413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 00:54:02.040432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 00:54:02.040440 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.040446 | orchestrator | 2026-03-19 00:54:02.040452 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-19 00:54:02.040459 | orchestrator | Thursday 19 March 2026 00:52:41 +0000 (0:00:01.401) 0:04:47.683 ******** 2026-03-19 00:54:02.040465 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.040471 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.040478 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.040484 | orchestrator | 2026-03-19 00:54:02.040492 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-19 00:54:02.040498 | orchestrator | Thursday 19 March 2026 00:52:42 +0000 (0:00:00.480) 0:04:48.164 ******** 2026-03-19 00:54:02.040505 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.040512 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.040519 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.040526 | orchestrator | 2026-03-19 00:54:02.040533 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-19 00:54:02.040541 | orchestrator | Thursday 19 March 2026 00:52:43 +0000 (0:00:01.318) 0:04:49.483 ******** 2026-03-19 00:54:02.040548 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.040563 | orchestrator | 2026-03-19 00:54:02.040570 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-19 00:54:02.040577 | orchestrator | Thursday 19 March 2026 00:52:45 +0000 (0:00:01.642) 0:04:51.125 ******** 2026-03-19 00:54:02.040585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 00:54:02.040598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 00:54:02.040607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 00:54:02.040651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 00:54:02.040664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 00:54:02.040679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 00:54:02.040713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 00:54:02.040749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 00:54:02.040762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 00:54:02.040770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 00:54:02.040789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 00:54:02.040826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 00:54:02.040845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040872 | orchestrator | 2026-03-19 00:54:02.040879 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-19 00:54:02.040901 | orchestrator | Thursday 19 March 2026 00:52:49 +0000 (0:00:04.284) 0:04:55.410 ******** 2026-03-19 00:54:02.040922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 00:54:02.040929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 00:54:02.040941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.040954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 00:54:02.040960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.040967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 00:54:02.040981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 00:54:02.040996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 00:54:02.041011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.041033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 00:54:02.041059 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.041066 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 00:54:02.041079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041086 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 00:54:02.041093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.041118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 00:54:02.041124 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.041152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 00:54:02.041165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 00:54:02.041180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 00:54:02.041196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 00:54:02.041203 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041210 | orchestrator | 2026-03-19 00:54:02.041217 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-19 00:54:02.041223 | orchestrator | Thursday 19 March 2026 00:52:50 +0000 (0:00:00.900) 0:04:56.311 ******** 
2026-03-19 00:54:02.041230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-19 00:54:02.041237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-19 00:54:02.041244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 00:54:02.041251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 00:54:02.041258 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-19 00:54:02.041271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-19 00:54:02.041283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 00:54:02.041290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 00:54:02.041297 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-19 00:54:02.041324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-19 00:54:02.041331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 00:54:02.041338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 00:54:02.041345 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041352 | orchestrator | 2026-03-19 00:54:02.041359 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-19 00:54:02.041366 | orchestrator | Thursday 19 March 2026 
00:52:51 +0000 (0:00:01.262) 0:04:57.573 ******** 2026-03-19 00:54:02.041373 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041380 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041387 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041394 | orchestrator | 2026-03-19 00:54:02.041401 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-19 00:54:02.041408 | orchestrator | Thursday 19 March 2026 00:52:52 +0000 (0:00:00.465) 0:04:58.039 ******** 2026-03-19 00:54:02.041414 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041421 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041427 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041434 | orchestrator | 2026-03-19 00:54:02.041441 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-19 00:54:02.041448 | orchestrator | Thursday 19 March 2026 00:52:53 +0000 (0:00:01.323) 0:04:59.362 ******** 2026-03-19 00:54:02.041455 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.041461 | orchestrator | 2026-03-19 00:54:02.041468 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-19 00:54:02.041474 | orchestrator | Thursday 19 March 2026 00:52:55 +0000 (0:00:01.415) 0:05:00.778 ******** 2026-03-19 00:54:02.041481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:54:02.041495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:54:02.041511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 00:54:02.041529 | orchestrator | 2026-03-19 00:54:02.041536 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-19 00:54:02.041542 | orchestrator | Thursday 19 March 2026 00:52:57 +0000 (0:00:02.597) 0:05:03.376 ******** 2026-03-19 00:54:02.041549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 00:54:02.041556 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 00:54:02.041575 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 00:54:02.041589 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 00:54:02.041596 | orchestrator | 2026-03-19 00:54:02.041603 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-19 00:54:02.041609 | orchestrator | Thursday 19 March 2026 00:52:58 +0000 (0:00:00.360) 0:05:03.737 ******** 2026-03-19 00:54:02.041624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 00:54:02.041632 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 00:54:02.041644 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 00:54:02.041658 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041664 | orchestrator | 2026-03-19 00:54:02.041670 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-19 00:54:02.041677 | orchestrator | Thursday 19 March 2026 00:52:58 +0000 (0:00:00.574) 0:05:04.311 ******** 2026-03-19 00:54:02.041683 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041689 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041695 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041702 | orchestrator | 2026-03-19 00:54:02.041709 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-19 00:54:02.041715 | orchestrator | Thursday 19 March 2026 00:52:59 +0000 (0:00:00.652) 0:05:04.963 ******** 2026-03-19 00:54:02.041721 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
00:54:02.041727 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041733 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041740 | orchestrator | 2026-03-19 00:54:02.041746 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-19 00:54:02.041752 | orchestrator | Thursday 19 March 2026 00:53:00 +0000 (0:00:01.150) 0:05:06.114 ******** 2026-03-19 00:54:02.041764 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:54:02.041770 | orchestrator | 2026-03-19 00:54:02.041776 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-19 00:54:02.041782 | orchestrator | Thursday 19 March 2026 00:53:01 +0000 (0:00:01.376) 0:05:07.490 ******** 2026-03-19 00:54:02.041789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.041797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.041813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.041822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.041835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.041842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 00:54:02.041849 | orchestrator | 2026-03-19 00:54:02.041856 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-19 00:54:02.041862 | orchestrator | Thursday 19 March 2026 00:53:07 +0000 (0:00:05.663) 0:05:13.153 ******** 2026-03-19 00:54:02.041873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.041925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.041939 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.041946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.041952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.041959 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.041965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.041980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 00:54:02.041988 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.041994 | orchestrator | 2026-03-19 00:54:02.042006 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-19 00:54:02.042046 | orchestrator | Thursday 19 March 2026 00:53:08 +0000 (0:00:00.999) 0:05:14.153 ******** 2026-03-19 00:54:02.042059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042089 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042131 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 00:54:02.042166 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042173 | orchestrator | 2026-03-19 00:54:02.042187 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-19 00:54:02.042194 | orchestrator | Thursday 19 March 2026 00:53:09 +0000 (0:00:00.936) 0:05:15.089 ******** 2026-03-19 00:54:02.042199 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.042206 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.042212 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.042219 | orchestrator | 2026-03-19 00:54:02.042225 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-19 00:54:02.042232 | orchestrator | Thursday 19 March 2026 00:53:10 +0000 (0:00:01.373) 0:05:16.463 ******** 2026-03-19 00:54:02.042250 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.042260 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.042267 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.042273 | orchestrator | 2026-03-19 00:54:02.042279 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-19 00:54:02.042286 | orchestrator | Thursday 19 March 2026 00:53:13 +0000 (0:00:02.227) 0:05:18.691 ******** 2026-03-19 00:54:02.042292 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042298 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042305 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042311 | 
orchestrator | 2026-03-19 00:54:02.042318 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-19 00:54:02.042324 | orchestrator | Thursday 19 March 2026 00:53:13 +0000 (0:00:00.613) 0:05:19.305 ******** 2026-03-19 00:54:02.042331 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042337 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042343 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042350 | orchestrator | 2026-03-19 00:54:02.042356 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-19 00:54:02.042363 | orchestrator | Thursday 19 March 2026 00:53:13 +0000 (0:00:00.318) 0:05:19.623 ******** 2026-03-19 00:54:02.042368 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042374 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042381 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042387 | orchestrator | 2026-03-19 00:54:02.042393 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-19 00:54:02.042399 | orchestrator | Thursday 19 March 2026 00:53:14 +0000 (0:00:00.313) 0:05:19.937 ******** 2026-03-19 00:54:02.042406 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042412 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042418 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042424 | orchestrator | 2026-03-19 00:54:02.042431 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-19 00:54:02.042437 | orchestrator | Thursday 19 March 2026 00:53:14 +0000 (0:00:00.289) 0:05:20.227 ******** 2026-03-19 00:54:02.042443 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042450 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042456 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042462 | 
orchestrator | 2026-03-19 00:54:02.042469 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-19 00:54:02.042476 | orchestrator | Thursday 19 March 2026 00:53:15 +0000 (0:00:00.600) 0:05:20.828 ******** 2026-03-19 00:54:02.042483 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042489 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042495 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042502 | orchestrator | 2026-03-19 00:54:02.042508 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-19 00:54:02.042514 | orchestrator | Thursday 19 March 2026 00:53:15 +0000 (0:00:00.521) 0:05:21.349 ******** 2026-03-19 00:54:02.042520 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042527 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042534 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042540 | orchestrator | 2026-03-19 00:54:02.042547 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-19 00:54:02.042553 | orchestrator | Thursday 19 March 2026 00:53:16 +0000 (0:00:00.726) 0:05:22.076 ******** 2026-03-19 00:54:02.042559 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042566 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042572 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042578 | orchestrator | 2026-03-19 00:54:02.042584 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-19 00:54:02.042591 | orchestrator | Thursday 19 March 2026 00:53:17 +0000 (0:00:00.638) 0:05:22.715 ******** 2026-03-19 00:54:02.042602 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042609 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042616 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042623 | orchestrator | 2026-03-19 00:54:02.042630 | orchestrator 
| RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-19 00:54:02.042644 | orchestrator | Thursday 19 March 2026 00:53:17 +0000 (0:00:00.954) 0:05:23.670 ******** 2026-03-19 00:54:02.042651 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042657 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042663 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042670 | orchestrator | 2026-03-19 00:54:02.042677 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-19 00:54:02.042684 | orchestrator | Thursday 19 March 2026 00:53:18 +0000 (0:00:00.889) 0:05:24.560 ******** 2026-03-19 00:54:02.042690 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042698 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042704 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042711 | orchestrator | 2026-03-19 00:54:02.042718 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-19 00:54:02.042732 | orchestrator | Thursday 19 March 2026 00:53:19 +0000 (0:00:00.905) 0:05:25.465 ******** 2026-03-19 00:54:02.042740 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.042747 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.042753 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.042759 | orchestrator | 2026-03-19 00:54:02.042765 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-19 00:54:02.042773 | orchestrator | Thursday 19 March 2026 00:53:29 +0000 (0:00:09.447) 0:05:34.913 ******** 2026-03-19 00:54:02.042779 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042786 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042792 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042799 | orchestrator | 2026-03-19 00:54:02.042806 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2026-03-19 00:54:02.042812 | orchestrator | Thursday 19 March 2026 00:53:30 +0000 (0:00:01.166) 0:05:36.079 ******** 2026-03-19 00:54:02.042818 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.042824 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.042831 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.042837 | orchestrator | 2026-03-19 00:54:02.042843 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-19 00:54:02.042849 | orchestrator | Thursday 19 March 2026 00:53:43 +0000 (0:00:12.866) 0:05:48.946 ******** 2026-03-19 00:54:02.042856 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.042868 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.042875 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.042900 | orchestrator | 2026-03-19 00:54:02.042912 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-19 00:54:02.042919 | orchestrator | Thursday 19 March 2026 00:53:44 +0000 (0:00:00.820) 0:05:49.767 ******** 2026-03-19 00:54:02.042926 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:54:02.042933 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:54:02.042940 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:54:02.042946 | orchestrator | 2026-03-19 00:54:02.042952 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-19 00:54:02.042958 | orchestrator | Thursday 19 March 2026 00:53:53 +0000 (0:00:09.535) 0:05:59.302 ******** 2026-03-19 00:54:02.042964 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.042971 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.042978 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.042984 | orchestrator | 2026-03-19 00:54:02.042991 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 
2026-03-19 00:54:02.042998 | orchestrator | Thursday 19 March 2026 00:53:54 +0000 (0:00:00.761) 0:06:00.064 ******** 2026-03-19 00:54:02.043005 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.043011 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.043024 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.043031 | orchestrator | 2026-03-19 00:54:02.043038 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-19 00:54:02.043044 | orchestrator | Thursday 19 March 2026 00:53:54 +0000 (0:00:00.344) 0:06:00.408 ******** 2026-03-19 00:54:02.043051 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.043059 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.043066 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.043072 | orchestrator | 2026-03-19 00:54:02.043087 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-19 00:54:02.043095 | orchestrator | Thursday 19 March 2026 00:53:55 +0000 (0:00:00.343) 0:06:00.751 ******** 2026-03-19 00:54:02.043101 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.043108 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.043114 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.043120 | orchestrator | 2026-03-19 00:54:02.043127 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-19 00:54:02.043133 | orchestrator | Thursday 19 March 2026 00:53:55 +0000 (0:00:00.355) 0:06:01.107 ******** 2026-03-19 00:54:02.043139 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.043146 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.043152 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.043158 | orchestrator | 2026-03-19 00:54:02.043165 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 
2026-03-19 00:54:02.043171 | orchestrator | Thursday 19 March 2026 00:53:56 +0000 (0:00:00.682) 0:06:01.789 ******** 2026-03-19 00:54:02.043177 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:54:02.043184 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:54:02.043190 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:54:02.043196 | orchestrator | 2026-03-19 00:54:02.043203 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-19 00:54:02.043209 | orchestrator | Thursday 19 March 2026 00:53:56 +0000 (0:00:00.403) 0:06:02.192 ******** 2026-03-19 00:54:02.043216 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.043223 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.043229 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.043236 | orchestrator | 2026-03-19 00:54:02.043242 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-19 00:54:02.043248 | orchestrator | Thursday 19 March 2026 00:53:57 +0000 (0:00:00.817) 0:06:03.009 ******** 2026-03-19 00:54:02.043254 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:54:02.043261 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:54:02.043267 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:54:02.043273 | orchestrator | 2026-03-19 00:54:02.043280 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:54:02.043286 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-19 00:54:02.043293 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-19 00:54:02.043300 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-19 00:54:02.043307 | orchestrator | 2026-03-19 00:54:02.043313 | orchestrator | 2026-03-19 00:54:02.043319 | orchestrator 
| TASKS RECAP ********************************************************************
2026-03-19 00:54:02.043325 | orchestrator | Thursday 19 March 2026 00:53:58 +0000 (0:00:00.812) 0:06:03.822 ********
2026-03-19 00:54:02.043331 | orchestrator | ===============================================================================
2026-03-19 00:54:02.043338 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.87s
2026-03-19 00:54:02.043345 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.54s
2026-03-19 00:54:02.043353 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.45s
2026-03-19 00:54:02.043368 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.66s
2026-03-19 00:54:02.043374 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.44s
2026-03-19 00:54:02.043380 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.35s
2026-03-19 00:54:02.043386 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.24s
2026-03-19 00:54:02.043392 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.77s
2026-03-19 00:54:02.043398 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.43s
2026-03-19 00:54:02.043411 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.28s
2026-03-19 00:54:02.043421 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.27s
2026-03-19 00:54:02.043427 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.24s
2026-03-19 00:54:02.043433 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.19s
2026-03-19 00:54:02.043439 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.93s
2026-03-19 00:54:02.043445 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.93s
2026-03-19 00:54:02.043452 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.89s
2026-03-19 00:54:02.043458 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.77s
2026-03-19 00:54:02.043465 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.77s
2026-03-19 00:54:02.043471 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.66s
2026-03-19 00:54:02.043478 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.56s
2026-03-19 00:54:02.043484 | orchestrator | 2026-03-19 00:54:02 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:54:05.076462 | orchestrator | 2026-03-19 00:54:05 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED
2026-03-19 00:54:05.081317 | orchestrator | 2026-03-19 00:54:05 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED
2026-03-19 00:54:05.083561 | orchestrator | 2026-03-19 00:54:05 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED
2026-03-19 00:54:05.083816 | orchestrator | 2026-03-19 00:54:05 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeated every ~3 s from 00:54:08 through 00:55:42: tasks b59676d3-0a83-4b90-8085-08f32ca42157, aeebe991-d677-4bc2-8203-aabf471843cd, and a7949153-4022-496a-a4b3-991828df50e4 remained in state STARTED ...]
INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:55:42.556408 | orchestrator | 2026-03-19 00:55:42 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:55:45.592567 | orchestrator | 2026-03-19 00:55:45 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:55:45.596023 | orchestrator | 2026-03-19 00:55:45 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:55:45.598985 | orchestrator | 2026-03-19 00:55:45 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:55:45.599461 | orchestrator | 2026-03-19 00:55:45 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:55:48.638325 | orchestrator | 2026-03-19 00:55:48 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:55:48.639255 | orchestrator | 2026-03-19 00:55:48 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:55:48.640104 | orchestrator | 2026-03-19 00:55:48 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:55:48.640323 | orchestrator | 2026-03-19 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:55:51.684026 | orchestrator | 2026-03-19 00:55:51 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:55:51.687456 | orchestrator | 2026-03-19 00:55:51 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:55:51.689070 | orchestrator | 2026-03-19 00:55:51 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:55:51.689405 | orchestrator | 2026-03-19 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:55:54.750189 | orchestrator | 2026-03-19 00:55:54 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:55:54.754396 | orchestrator | 2026-03-19 00:55:54 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in 
state STARTED 2026-03-19 00:55:54.756522 | orchestrator | 2026-03-19 00:55:54 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:55:54.756667 | orchestrator | 2026-03-19 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:55:57.815610 | orchestrator | 2026-03-19 00:55:57 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:55:57.816849 | orchestrator | 2026-03-19 00:55:57 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:55:57.819033 | orchestrator | 2026-03-19 00:55:57 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:55:57.819104 | orchestrator | 2026-03-19 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:00.863506 | orchestrator | 2026-03-19 00:56:00 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:00.865087 | orchestrator | 2026-03-19 00:56:00 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:00.867133 | orchestrator | 2026-03-19 00:56:00 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:00.867184 | orchestrator | 2026-03-19 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:03.919577 | orchestrator | 2026-03-19 00:56:03 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:03.922318 | orchestrator | 2026-03-19 00:56:03 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:03.924121 | orchestrator | 2026-03-19 00:56:03 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:03.924200 | orchestrator | 2026-03-19 00:56:03 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:06.971353 | orchestrator | 2026-03-19 00:56:06 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:06.971622 | orchestrator 
| 2026-03-19 00:56:06 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:06.973127 | orchestrator | 2026-03-19 00:56:06 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:06.973179 | orchestrator | 2026-03-19 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:10.022472 | orchestrator | 2026-03-19 00:56:10 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:10.024172 | orchestrator | 2026-03-19 00:56:10 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:10.026071 | orchestrator | 2026-03-19 00:56:10 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:10.026129 | orchestrator | 2026-03-19 00:56:10 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:13.069123 | orchestrator | 2026-03-19 00:56:13 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:13.069659 | orchestrator | 2026-03-19 00:56:13 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:13.070568 | orchestrator | 2026-03-19 00:56:13 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:13.070606 | orchestrator | 2026-03-19 00:56:13 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:16.118809 | orchestrator | 2026-03-19 00:56:16 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:16.120776 | orchestrator | 2026-03-19 00:56:16 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:16.122502 | orchestrator | 2026-03-19 00:56:16 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:16.122636 | orchestrator | 2026-03-19 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:19.168927 | orchestrator | 2026-03-19 00:56:19 | INFO  | Task 
b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:19.171493 | orchestrator | 2026-03-19 00:56:19 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:19.177759 | orchestrator | 2026-03-19 00:56:19 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:19.177809 | orchestrator | 2026-03-19 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:22.223245 | orchestrator | 2026-03-19 00:56:22 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:22.224645 | orchestrator | 2026-03-19 00:56:22 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:22.226436 | orchestrator | 2026-03-19 00:56:22 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:22.226474 | orchestrator | 2026-03-19 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:25.278367 | orchestrator | 2026-03-19 00:56:25 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:25.280027 | orchestrator | 2026-03-19 00:56:25 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:25.282332 | orchestrator | 2026-03-19 00:56:25 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:25.282876 | orchestrator | 2026-03-19 00:56:25 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:28.332953 | orchestrator | 2026-03-19 00:56:28 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:28.333699 | orchestrator | 2026-03-19 00:56:28 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:28.334418 | orchestrator | 2026-03-19 00:56:28 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:28.334777 | orchestrator | 2026-03-19 00:56:28 | INFO  | Wait 1 second(s) until the next 
check 2026-03-19 00:56:31.382054 | orchestrator | 2026-03-19 00:56:31 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:31.383729 | orchestrator | 2026-03-19 00:56:31 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:31.385594 | orchestrator | 2026-03-19 00:56:31 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:31.385675 | orchestrator | 2026-03-19 00:56:31 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:34.425612 | orchestrator | 2026-03-19 00:56:34 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:34.426141 | orchestrator | 2026-03-19 00:56:34 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:34.427923 | orchestrator | 2026-03-19 00:56:34 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:34.428097 | orchestrator | 2026-03-19 00:56:34 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:37.486243 | orchestrator | 2026-03-19 00:56:37 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:37.489488 | orchestrator | 2026-03-19 00:56:37 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state STARTED 2026-03-19 00:56:37.491731 | orchestrator | 2026-03-19 00:56:37 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:37.491784 | orchestrator | 2026-03-19 00:56:37 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:40.548743 | orchestrator | 2026-03-19 00:56:40 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:40.552571 | orchestrator | 2026-03-19 00:56:40 | INFO  | Task aeebe991-d677-4bc2-8203-aabf471843cd is in state SUCCESS 2026-03-19 00:56:40.553771 | orchestrator | 2026-03-19 00:56:40.553818 | orchestrator | 2026-03-19 00:56:40.553823 | orchestrator | PLAY [Group hosts based on 
configuration] **************************************
2026-03-19 00:56:40.553828 | orchestrator |
2026-03-19 00:56:40.553832 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 00:56:40.553845 | orchestrator | Thursday 19 March 2026 00:54:02 +0000 (0:00:00.324) 0:00:00.324 ********
2026-03-19 00:56:40.553849 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:40.553854 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:40.553860 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:40.553867 | orchestrator |
2026-03-19 00:56:40.553874 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 00:56:40.553881 | orchestrator | Thursday 19 March 2026 00:54:02 +0000 (0:00:00.267) 0:00:00.592 ********
2026-03-19 00:56:40.553888 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-19 00:56:40.553895 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-19 00:56:40.553925 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-19 00:56:40.553933 | orchestrator |
2026-03-19 00:56:40.553940 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-19 00:56:40.553986 | orchestrator |
2026-03-19 00:56:40.553992 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 00:56:40.553999 | orchestrator | Thursday 19 March 2026 00:54:02 +0000 (0:00:00.282) 0:00:00.875 ********
2026-03-19 00:56:40.554005 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:40.554041 | orchestrator |
2026-03-19 00:56:40.554056 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-19 00:56:40.554067 | orchestrator | Thursday 19 March 2026 00:54:03 +0000 (0:00:00.575) 0:00:01.450 ********
2026-03-19 00:56:40.554074 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 00:56:40.554080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 00:56:40.554086 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 00:56:40.554093 | orchestrator |
2026-03-19 00:56:40.554099 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-19 00:56:40.554105 | orchestrator | Thursday 19 March 2026 00:54:04 +0000 (0:00:00.970) 0:00:02.420 ********
2026-03-19 00:56:40.554113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-19 00:56:40.554122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', … identical to the testbed-node-0 item above except healthcheck_curl http://192.168.16.12:9200 …})
2026-03-19 00:56:40.554139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', … identical except healthcheck_curl http://192.168.16.11:9200 …})
2026-03-19 00:56:40.554151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-19 00:56:40.554164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', … identical to the testbed-node-0 item above except healthcheck_curl http://192.168.16.12:5601 …})
2026-03-19 00:56:40.554171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', … identical except healthcheck_curl http://192.168.16.11:5601 …})
2026-03-19 00:56:40.554178 | orchestrator |
2026-03-19 00:56:40.554184 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 00:56:40.554191 | orchestrator | Thursday 19 March 2026 00:54:05 +0000 (0:00:01.269) 0:00:03.690 ********
2026-03-19 00:56:40.554197 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:40.554203 | orchestrator |
2026-03-19 00:56:40.554210 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-19 00:56:40.554216 | orchestrator | Thursday 19 March 2026 00:54:06 +0000 (0:00:00.536) 0:00:04.227 ********
2026-03-19 00:56:40.554230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', … same service definition as above, healthcheck_curl http://192.168.16.11:9200 …})
2026-03-19 00:56:40.554241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', … healthcheck_curl http://192.168.16.12:9200 …})
2026-03-19 00:56:40.554253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', … healthcheck_curl http://192.168.16.10:9200 …})
2026-03-19 00:56:40.554264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', … same service definition as above, healthcheck_curl http://192.168.16.10:5601 …})
2026-03-19 00:56:40.554274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', … healthcheck_curl http://192.168.16.12:5601 …})
2026-03-19 00:56:40.554287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', … healthcheck_curl http://192.168.16.11:5601 …})
2026-03-19 00:56:40.554294 | orchestrator |
2026-03-19 00:56:40.554301 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-03-19 00:56:40.554307 | orchestrator | Thursday 19 March 2026 00:54:08 +0000 (0:00:02.773) 0:00:07.000 ********
2026-03-19 00:56:40.554313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', … same service definition as above …})
2026-03-19 00:56:40.554320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', … same service definition as above …})
2026-03-19 00:56:40.554327 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:40.554338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', …})
2026-03-19 00:56:40.554356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', …})
2026-03-19 00:56:40.554363 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:40.554369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', …})
2026-03-19 00:56:40.554376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', …})
2026-03-19 00:56:40.554383 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:40.554390 | orchestrator |
2026-03-19 00:56:40.554397 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-03-19 00:56:40.554403 | orchestrator | Thursday 19 March 2026 00:54:09 +0000 (0:00:00.883) 0:00:07.884 ********
2026-03-19 00:56:40.554410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', …})
2026-03-19 00:56:40.554425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', …})
2026-03-19 00:56:40.554432 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:40.554439 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 00:56:40.554447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 00:56:40.554508 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
00:56:40.554522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 00:56:40.554562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 00:56:40.554572 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 00:56:40.554579 | orchestrator | 2026-03-19 00:56:40.554586 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-19 00:56:40.554593 | orchestrator | Thursday 19 March 2026 00:54:10 +0000 (0:00:00.707) 0:00:08.591 ******** 2026-03-19 00:56:40.554600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:56:40.554607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-03-19 00:56:40.554626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:56:40.554644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 
00:56:40.554653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:56:40.554661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:56:40.554668 | orchestrator | 2026-03-19 00:56:40.554675 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-19 00:56:40.554682 | orchestrator | Thursday 19 March 2026 00:54:13 +0000 (0:00:02.639) 0:00:11.231 ******** 2026-03-19 00:56:40.554689 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:40.554695 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:40.554702 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:40.554709 | orchestrator | 2026-03-19 00:56:40.554719 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-19 00:56:40.554725 | orchestrator | Thursday 19 March 2026 00:54:15 +0000 (0:00:02.894) 0:00:14.125 ******** 2026-03-19 00:56:40.554732 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:40.554739 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:40.554745 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:40.554752 | orchestrator | 2026-03-19 00:56:40.554758 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-19 00:56:40.554765 | orchestrator | Thursday 19 March 2026 00:54:17 +0000 (0:00:01.717) 0:00:15.842 ******** 2026-03-19 00:56:40.554771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:56:40.554905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:56:40.554915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 00:56:40.554922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:56:40.554933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:56:40.554947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 00:56:40.554955 | orchestrator | 2026-03-19 00:56:40.554961 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-19 00:56:40.554968 | orchestrator | Thursday 19 March 2026 00:54:20 +0000 (0:00:02.789) 0:00:18.631 ******** 2026-03-19 00:56:40.554975 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:40.554982 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:40.554989 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:40.554996 | orchestrator | 2026-03-19 00:56:40.555003 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 
2026-03-19 00:56:40.555010 | orchestrator | Thursday 19 March 2026 00:54:20 +0000 (0:00:00.471) 0:00:19.103 ******** 2026-03-19 00:56:40.555016 | orchestrator | 2026-03-19 00:56:40.555022 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-19 00:56:40.555028 | orchestrator | Thursday 19 March 2026 00:54:21 +0000 (0:00:00.084) 0:00:19.188 ******** 2026-03-19 00:56:40.555034 | orchestrator | 2026-03-19 00:56:40.555040 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-19 00:56:40.555047 | orchestrator | Thursday 19 March 2026 00:54:21 +0000 (0:00:00.069) 0:00:19.257 ******** 2026-03-19 00:56:40.555053 | orchestrator | 2026-03-19 00:56:40.555059 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-19 00:56:40.555066 | orchestrator | Thursday 19 March 2026 00:54:21 +0000 (0:00:00.069) 0:00:19.326 ******** 2026-03-19 00:56:40.555072 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:40.555078 | orchestrator | 2026-03-19 00:56:40.555084 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-19 00:56:40.555089 | orchestrator | Thursday 19 March 2026 00:54:21 +0000 (0:00:00.194) 0:00:19.520 ******** 2026-03-19 00:56:40.555101 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:40.555107 | orchestrator | 2026-03-19 00:56:40.555113 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-19 00:56:40.555120 | orchestrator | Thursday 19 March 2026 00:54:21 +0000 (0:00:00.196) 0:00:19.716 ******** 2026-03-19 00:56:40.555126 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:40.555132 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:40.555138 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:40.555145 | orchestrator | 2026-03-19 00:56:40.555151 | orchestrator | RUNNING 
HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-19 00:56:40.555158 | orchestrator | Thursday 19 March 2026 00:55:16 +0000 (0:00:54.784) 0:01:14.501 ******** 2026-03-19 00:56:40.555165 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:40.555171 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:40.555178 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:40.555185 | orchestrator | 2026-03-19 00:56:40.555191 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-19 00:56:40.555198 | orchestrator | Thursday 19 March 2026 00:56:27 +0000 (0:01:11.082) 0:02:25.584 ******** 2026-03-19 00:56:40.555205 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:40.555210 | orchestrator | 2026-03-19 00:56:40.555214 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-19 00:56:40.555218 | orchestrator | Thursday 19 March 2026 00:56:28 +0000 (0:00:00.644) 0:02:26.228 ******** 2026-03-19 00:56:40.555222 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:40.555226 | orchestrator | 2026-03-19 00:56:40.555230 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-19 00:56:40.555234 | orchestrator | Thursday 19 March 2026 00:56:30 +0000 (0:00:02.031) 0:02:28.260 ******** 2026-03-19 00:56:40.555237 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:40.555241 | orchestrator | 2026-03-19 00:56:40.555245 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-19 00:56:40.555249 | orchestrator | Thursday 19 March 2026 00:56:32 +0000 (0:00:01.956) 0:02:30.217 ******** 2026-03-19 00:56:40.555252 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:40.555256 | orchestrator | 2026-03-19 00:56:40.555260 | orchestrator | TASK [opensearch : 
Create new log retention policy] **************************** 2026-03-19 00:56:40.555264 | orchestrator | Thursday 19 March 2026 00:56:34 +0000 (0:00:02.074) 0:02:32.292 ******** 2026-03-19 00:56:40.555268 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:40.555272 | orchestrator | 2026-03-19 00:56:40.555275 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-19 00:56:40.555279 | orchestrator | Thursday 19 March 2026 00:56:36 +0000 (0:00:02.109) 0:02:34.401 ******** 2026-03-19 00:56:40.555283 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:40.555286 | orchestrator | 2026-03-19 00:56:40.555290 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:56:40.555294 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 00:56:40.555299 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 00:56:40.555306 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 00:56:40.555310 | orchestrator | 2026-03-19 00:56:40.555314 | orchestrator | 2026-03-19 00:56:40.555317 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:56:40.555324 | orchestrator | Thursday 19 March 2026 00:56:38 +0000 (0:00:02.114) 0:02:36.516 ******** 2026-03-19 00:56:40.555328 | orchestrator | =============================================================================== 2026-03-19 00:56:40.555335 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.08s 2026-03-19 00:56:40.555339 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.78s 2026-03-19 00:56:40.555343 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.89s 
2026-03-19 00:56:40.555347 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.79s 2026-03-19 00:56:40.555351 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.77s 2026-03-19 00:56:40.555354 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.64s 2026-03-19 00:56:40.555358 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.12s 2026-03-19 00:56:40.555362 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.11s 2026-03-19 00:56:40.555365 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.07s 2026-03-19 00:56:40.555369 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.03s 2026-03-19 00:56:40.555373 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 1.96s 2026-03-19 00:56:40.555376 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.72s 2026-03-19 00:56:40.555380 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.27s 2026-03-19 00:56:40.555384 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.97s 2026-03-19 00:56:40.555388 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.88s 2026-03-19 00:56:40.555392 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.71s 2026-03-19 00:56:40.555395 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2026-03-19 00:56:40.555399 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2026-03-19 00:56:40.555403 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 
2026-03-19 00:56:40.555407 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2026-03-19 00:56:40.555422 | orchestrator | 2026-03-19 00:56:40 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:40.555427 | orchestrator | 2026-03-19 00:56:40 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:43.593774 | orchestrator | 2026-03-19 00:56:43 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state STARTED 2026-03-19 00:56:43.596392 | orchestrator | 2026-03-19 00:56:43 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED 2026-03-19 00:56:43.596738 | orchestrator | 2026-03-19 00:56:43 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:46.644017 | orchestrator | 2026-03-19 00:56:46 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:56:46.650136 | orchestrator | 2026-03-19 00:56:46 | INFO  | Task b59676d3-0a83-4b90-8085-08f32ca42157 is in state SUCCESS 2026-03-19 00:56:46.652761 | orchestrator | 2026-03-19 00:56:46.652851 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-19 00:56:46.652878 | orchestrator | 2.16.14 2026-03-19 00:56:46.652898 | orchestrator | 2026-03-19 00:56:46.652912 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-19 00:56:46.652927 | orchestrator | 2026-03-19 00:56:46.652940 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 00:56:46.652955 | orchestrator | Thursday 19 March 2026 00:45:42 +0000 (0:00:00.746) 0:00:00.746 ******** 2026-03-19 00:56:46.652971 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.652987 | orchestrator | 2026-03-19 00:56:46.653001 | orchestrator | TASK [ceph-facts : 
Check if it is atomic host] ********************************* 2026-03-19 00:56:46.653042 | orchestrator | Thursday 19 March 2026 00:45:43 +0000 (0:00:01.181) 0:00:01.928 ******** 2026-03-19 00:56:46.653057 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.653185 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.653198 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.653212 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.653234 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.653250 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.653264 | orchestrator | 2026-03-19 00:56:46.653279 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 00:56:46.653295 | orchestrator | Thursday 19 March 2026 00:45:45 +0000 (0:00:02.457) 0:00:04.385 ******** 2026-03-19 00:56:46.653304 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.653313 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.653323 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.653334 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.653343 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.653353 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.653362 | orchestrator | 2026-03-19 00:56:46.653372 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 00:56:46.653382 | orchestrator | Thursday 19 March 2026 00:45:46 +0000 (0:00:00.668) 0:00:05.054 ******** 2026-03-19 00:56:46.653392 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.653401 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.653410 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.653419 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.653429 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.653451 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.653461 | orchestrator | 2026-03-19 00:56:46.653472 | 
orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 00:56:46.653482 | orchestrator | Thursday 19 March 2026 00:45:47 +0000 (0:00:00.977) 0:00:06.031 ******** 2026-03-19 00:56:46.653496 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.653518 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.653536 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.653550 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.653566 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.653583 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.653594 | orchestrator | 2026-03-19 00:56:46.653625 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 00:56:46.653635 | orchestrator | Thursday 19 March 2026 00:45:48 +0000 (0:00:00.887) 0:00:06.918 ******** 2026-03-19 00:56:46.653645 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.653655 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.653664 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.653674 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.653683 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.653692 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.653702 | orchestrator | 2026-03-19 00:56:46.653711 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 00:56:46.653721 | orchestrator | Thursday 19 March 2026 00:45:49 +0000 (0:00:00.938) 0:00:07.857 ******** 2026-03-19 00:56:46.653731 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.653741 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.653750 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.653828 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.653838 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.653847 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.653862 | orchestrator | 
2026-03-19 00:56:46.653959 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 00:56:46.653977 | orchestrator | Thursday 19 March 2026 00:45:50 +0000 (0:00:01.140) 0:00:08.997 ******** 2026-03-19 00:56:46.653993 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.654010 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.654102 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.654118 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.654149 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.654159 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.654169 | orchestrator | 2026-03-19 00:56:46.654179 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 00:56:46.654190 | orchestrator | Thursday 19 March 2026 00:45:51 +0000 (0:00:00.900) 0:00:09.898 ******** 2026-03-19 00:56:46.654200 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.654209 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.654218 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.654228 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.654239 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.654255 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.654278 | orchestrator | 2026-03-19 00:56:46.654293 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 00:56:46.654309 | orchestrator | Thursday 19 March 2026 00:45:52 +0000 (0:00:00.772) 0:00:10.670 ******** 2026-03-19 00:56:46.654323 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:56:46.654380 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:56:46.654399 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-19 00:56:46.654417 | orchestrator | 2026-03-19 00:56:46.654434 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 00:56:46.654451 | orchestrator | Thursday 19 March 2026 00:45:52 +0000 (0:00:00.469) 0:00:11.140 ******** 2026-03-19 00:56:46.654662 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.654677 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.654686 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.654711 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.654722 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.654732 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.654742 | orchestrator | 2026-03-19 00:56:46.654752 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 00:56:46.654763 | orchestrator | Thursday 19 March 2026 00:45:54 +0000 (0:00:01.672) 0:00:12.812 ******** 2026-03-19 00:56:46.654773 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:56:46.654783 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:56:46.654811 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:56:46.654821 | orchestrator | 2026-03-19 00:56:46.654830 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 00:56:46.654840 | orchestrator | Thursday 19 March 2026 00:45:57 +0000 (0:00:03.088) 0:00:15.901 ******** 2026-03-19 00:56:46.654850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 00:56:46.654860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 00:56:46.654869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 00:56:46.654879 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 00:56:46.654889 | orchestrator | 2026-03-19 00:56:46.654899 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 00:56:46.654910 | orchestrator | Thursday 19 March 2026 00:45:58 +0000 (0:00:00.646) 0:00:16.547 ******** 2026-03-19 00:56:46.654921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.654941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.654981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.655001 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.655082 | orchestrator | 2026-03-19 00:56:46.655094 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 00:56:46.655104 | orchestrator | Thursday 19 March 2026 00:45:59 +0000 (0:00:01.346) 0:00:17.894 ******** 2026-03-19 00:56:46.655140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.655199 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.655211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.655222 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.655232 | orchestrator | 2026-03-19 00:56:46.655242 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 00:56:46.655253 | orchestrator | Thursday 19 March 2026 00:45:59 +0000 (0:00:00.240) 0:00:18.135 ******** 2026-03-19 00:56:46.655278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 00:45:55.374017', 'end': '2026-03-19 00:45:55.483124', 'delta': '0:00:00.109107', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 
00:56:46.655298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 00:45:56.266393', 'end': '2026-03-19 00:45:56.366360', 'delta': '0:00:00.099967', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.655350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 00:45:57.159171', 'end': '2026-03-19 00:45:57.250078', 'delta': '0:00:00.090907', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.655380 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.655395 | orchestrator | 2026-03-19 00:56:46.655411 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 00:56:46.655427 | orchestrator | Thursday 19 March 2026 00:46:00 +0000 (0:00:00.850) 0:00:18.986 ******** 2026-03-19 00:56:46.655444 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.655460 | orchestrator | 
ok: [testbed-node-4] 2026-03-19 00:56:46.655474 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.655490 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.655506 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.655522 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.655538 | orchestrator | 2026-03-19 00:56:46.655555 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 00:56:46.655571 | orchestrator | Thursday 19 March 2026 00:46:02 +0000 (0:00:02.398) 0:00:21.384 ******** 2026-03-19 00:56:46.655588 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 00:56:46.655742 | orchestrator | 2026-03-19 00:56:46.655765 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 00:56:46.655866 | orchestrator | Thursday 19 March 2026 00:46:03 +0000 (0:00:00.835) 0:00:22.220 ******** 2026-03-19 00:56:46.655881 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.655891 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.655906 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.655921 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.655935 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.655949 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.655961 | orchestrator | 2026-03-19 00:56:46.655976 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 00:56:46.655991 | orchestrator | Thursday 19 March 2026 00:46:05 +0000 (0:00:01.493) 0:00:23.713 ******** 2026-03-19 00:56:46.656037 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656049 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.656058 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.656066 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.656075 | orchestrator | 
skipping: [testbed-node-1] 2026-03-19 00:56:46.656084 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.656093 | orchestrator | 2026-03-19 00:56:46.656101 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 00:56:46.656110 | orchestrator | Thursday 19 March 2026 00:46:06 +0000 (0:00:01.424) 0:00:25.138 ******** 2026-03-19 00:56:46.656119 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656127 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.656135 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.656144 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.656152 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.656161 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.656169 | orchestrator | 2026-03-19 00:56:46.656178 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 00:56:46.656204 | orchestrator | Thursday 19 March 2026 00:46:07 +0000 (0:00:00.838) 0:00:25.976 ******** 2026-03-19 00:56:46.656214 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656223 | orchestrator | 2026-03-19 00:56:46.656231 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 00:56:46.656240 | orchestrator | Thursday 19 March 2026 00:46:07 +0000 (0:00:00.195) 0:00:26.171 ******** 2026-03-19 00:56:46.656249 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656275 | orchestrator | 2026-03-19 00:56:46.656285 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 00:56:46.656305 | orchestrator | Thursday 19 March 2026 00:46:07 +0000 (0:00:00.168) 0:00:26.340 ******** 2026-03-19 00:56:46.656313 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656382 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.656391 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 00:56:46.656434 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.656444 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.656452 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.656459 | orchestrator | 2026-03-19 00:56:46.656467 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 00:56:46.656478 | orchestrator | Thursday 19 March 2026 00:46:08 +0000 (0:00:00.469) 0:00:26.809 ******** 2026-03-19 00:56:46.656523 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656538 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.656553 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.656566 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.656580 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.656595 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.656630 | orchestrator | 2026-03-19 00:56:46.656644 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 00:56:46.656657 | orchestrator | Thursday 19 March 2026 00:46:09 +0000 (0:00:00.926) 0:00:27.736 ******** 2026-03-19 00:56:46.656670 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.656684 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.656696 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.656709 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.656722 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.656735 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.656749 | orchestrator | 2026-03-19 00:56:46.656762 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 00:56:46.656776 | orchestrator | Thursday 19 March 2026 00:46:10 +0000 (0:00:00.897) 0:00:28.633 ******** 2026-03-19 00:56:46.656789 | orchestrator | 
skipping: [testbed-node-3] 2026-03-19 00:56:46.656801 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.656810 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.656817 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.656869 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.656880 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.656929 | orchestrator | 2026-03-19 00:56:46.656978 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 00:56:46.656995 | orchestrator | Thursday 19 March 2026 00:46:11 +0000 (0:00:00.977) 0:00:29.611 ******** 2026-03-19 00:56:46.657009 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.657022 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.657035 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.657057 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.657072 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.657083 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.657091 | orchestrator | 2026-03-19 00:56:46.657099 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 00:56:46.657107 | orchestrator | Thursday 19 March 2026 00:46:11 +0000 (0:00:00.564) 0:00:30.176 ******** 2026-03-19 00:56:46.657114 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.657122 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.657131 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.657138 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.657146 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.657154 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.657161 | orchestrator | 2026-03-19 00:56:46.657170 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 
00:56:46.657178 | orchestrator | Thursday 19 March 2026 00:46:12 +0000 (0:00:00.665) 0:00:30.841 ******** 2026-03-19 00:56:46.657185 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.657202 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.657210 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.657220 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.657233 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.657253 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.657319 | orchestrator | 2026-03-19 00:56:46.657335 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 00:56:46.657350 | orchestrator | Thursday 19 March 2026 00:46:12 +0000 (0:00:00.628) 0:00:31.469 ******** 2026-03-19 00:56:46.657367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0', 'dm-uuid-LVM-wGHRZdQDjg7vWurNEdhtc2UbI834lJn3dmVIrhekVpy3FO1O1xKqGaZmVIfMMr3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c', 'dm-uuid-LVM-YJ5R6ssJBZnSwomj4KA118jQLucuu9g7fKyyCMhU750XfMum9yqZRg037CQJJiqS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657479 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c', 'dm-uuid-LVM-7N41ZUFIMAXsQSUepdaXTlYgVduEAh0mYbywt0PbMF6rvfnHGFOKoqx1SYb7yfJz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1', 'dm-uuid-LVM-dCofXil7JsY0aXuuqmsFXceNZQjGuIC9lL6jKguWcjVZBueY2muhAfprIfKqF9se'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-19 00:56:46.657826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.657963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.657989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658058 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2f61p2-jEyl-RpgU-sj5H-HS7W-v4rc-bkHIfD', 'scsi-0QEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1', 'scsi-SQEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i3Cm88-eUVQ-T5g2-dPBI-tgHR-J0r6-11VZ1M', 'scsi-0QEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600', 'scsi-SQEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3', 'scsi-SQEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvF6bc-JV0t-rctN-oq69-66zh-uec0-1pLf1I', 'scsi-0QEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f', 'scsi-SQEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 
'value': {'holders': ['ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UyvJR0-eWJa-VJQz-wPxK-2odC-cvUy-VOQer1', 'scsi-0QEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361', 'scsi-SQEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d', 'scsi-SQEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 
00:56:46.658208 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.658226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb', 'dm-uuid-LVM-kSU7NOpZdrx1DM0VxQW2rlgZLxiojUbqfvtvBF0d8sWGc9vxnyKtJ8R9Cw6mmxfP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3', 'dm-uuid-LVM-XJS6QCxb3Z3bSJ0LVsY39xUM9q1hATkVetyxEGek35uW73tkjXLoTbJvAfnRxCRU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658436 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b82v9G-2Ska-2RTK-iDfN-Mq85-FRiq-DBlpZs', 'scsi-0QEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5', 'scsi-SQEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52XQY0-ueRj-IBB7-FKHA-4Vnm-xluU-ldZA0L', 'scsi-0QEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400', 'scsi-SQEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85', 'scsi-SQEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658476 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.658488 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.658500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658682 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658715 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658753 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658787 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.658803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:56:46.658828 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.658835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:56:46.658842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-19 00:56:46.658903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-19 00:56:46.658916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-19 00:56:46.658934 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.658946 | orchestrator |
2026-03-19 00:56:46.658958 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-19 00:56:46.658966 | orchestrator | Thursday 19 March 2026 00:46:14 +0000 (0:00:01.587) 0:00:33.057 ********
2026-03-19 00:56:46.658974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0', 'dm-uuid-LVM-wGHRZdQDjg7vWurNEdhtc2UbI834lJn3dmVIrhekVpy3FO1O1xKqGaZmVIfMMr3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.658986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c', 'dm-uuid-LVM-YJ5R6ssJBZnSwomj4KA118jQLucuu9g7fKyyCMhU750XfMum9yqZRg037CQJJiqS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.658994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2f61p2-jEyl-RpgU-sj5H-HS7W-v4rc-bkHIfD', 'scsi-0QEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1', 'scsi-SQEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i3Cm88-eUVQ-T5g2-dPBI-tgHR-J0r6-11VZ1M', 'scsi-0QEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600', 'scsi-SQEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3', 'scsi-SQEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659242 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c', 'dm-uuid-LVM-7N41ZUFIMAXsQSUepdaXTlYgVduEAh0mYbywt0PbMF6rvfnHGFOKoqx1SYb7yfJz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659328 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1', 'dm-uuid-LVM-dCofXil7JsY0aXuuqmsFXceNZQjGuIC9lL6jKguWcjVZBueY2muhAfprIfKqF9se'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659354 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb', 'dm-uuid-LVM-kSU7NOpZdrx1DM0VxQW2rlgZLxiojUbqfvtvBF0d8sWGc9vxnyKtJ8R9Cw6mmxfP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659431 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3', 'dm-uuid-LVM-XJS6QCxb3Z3bSJ0LVsY39xUM9q1hATkVetyxEGek35uW73tkjXLoTbJvAfnRxCRU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659462 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659475 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.659491 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659532 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659544 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659577 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659595 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659634 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659720 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659742 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvF6bc-JV0t-rctN-oq69-66zh-uec0-1pLf1I', 'scsi-0QEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f', 'scsi-SQEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659784 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 00:56:46.659814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 00:56:46.659828 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.659883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UyvJR0-eWJa-VJQz-wPxK-2odC-cvUy-VOQer1', 'scsi-0QEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361', 'scsi-SQEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.659930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b82v9G-2Ska-2RTK-iDfN-Mq85-FRiq-DBlpZs', 'scsi-0QEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5', 'scsi-SQEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.659950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52XQY0-ueRj-IBB7-FKHA-4Vnm-xluU-ldZA0L', 'scsi-0QEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400', 'scsi-SQEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.659963 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.659980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85', 'scsi-SQEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.659993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660024 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d', 'scsi-SQEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660041 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a641ab7-974c-4f28-9787-11bbad1144db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660123 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660133 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660141 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660158 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660178 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660202 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660634 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 00:56:46.660660 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660681 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a8b28d7-84b9-47da-9987-4ea2478cc2a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660702 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660714 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.660732 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.660744 | orchestrator | skipping: [testbed-node-0] 
2026-03-19 00:56:46.660756 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.660779 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660800 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660821 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660839 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660863 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 
00:56:46.660880 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660892 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660905 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e5d4363-16ee-4997-bf0a-5f4ad4cc8f99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 00:56:46.660923 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:56:46.660953 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.660965 | orchestrator | 2026-03-19 00:56:46.660981 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 00:56:46.660994 | orchestrator | Thursday 19 March 2026 00:46:16 +0000 (0:00:01.798) 0:00:34.855 ******** 2026-03-19 00:56:46.661005 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.661017 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.661028 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.661040 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.661051 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.661062 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.661074 | orchestrator | 2026-03-19 00:56:46.661085 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 00:56:46.661096 | orchestrator | Thursday 19 March 2026 00:46:17 +0000 (0:00:01.435) 0:00:36.291 ******** 2026-03-19 00:56:46.661108 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.661119 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.661131 | 
orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.661142 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.661153 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.661165 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.661176 | orchestrator | 2026-03-19 00:56:46.661188 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 00:56:46.661208 | orchestrator | Thursday 19 March 2026 00:46:18 +0000 (0:00:00.690) 0:00:36.982 ******** 2026-03-19 00:56:46.661221 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.661233 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.661253 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.661273 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.661294 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.661315 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.661336 | orchestrator | 2026-03-19 00:56:46.661356 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 00:56:46.661377 | orchestrator | Thursday 19 March 2026 00:46:19 +0000 (0:00:01.128) 0:00:38.110 ******** 2026-03-19 00:56:46.661397 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.661418 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.661439 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.661460 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.661480 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.661500 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.661521 | orchestrator | 2026-03-19 00:56:46.661542 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 00:56:46.661650 | orchestrator | Thursday 19 March 2026 00:46:20 +0000 (0:00:01.140) 0:00:39.251 ******** 2026-03-19 00:56:46.661675 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 00:56:46.661686 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.661697 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.661709 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.661721 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.661732 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.661744 | orchestrator | 2026-03-19 00:56:46.661755 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 00:56:46.661766 | orchestrator | Thursday 19 March 2026 00:46:22 +0000 (0:00:01.456) 0:00:40.707 ******** 2026-03-19 00:56:46.661778 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.661789 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.661800 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.661811 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.661822 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.661834 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.661844 | orchestrator | 2026-03-19 00:56:46.661856 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 00:56:46.661865 | orchestrator | Thursday 19 March 2026 00:46:23 +0000 (0:00:01.027) 0:00:41.734 ******** 2026-03-19 00:56:46.661872 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 00:56:46.661881 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 00:56:46.661892 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 00:56:46.661909 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 00:56:46.661921 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 00:56:46.661931 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 00:56:46.661941 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2026-03-19 00:56:46.661952 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 00:56:46.661962 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 00:56:46.661972 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 00:56:46.661983 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-19 00:56:46.661994 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 00:56:46.662005 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 00:56:46.662070 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-19 00:56:46.662081 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 00:56:46.662091 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-19 00:56:46.662112 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 00:56:46.662122 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-19 00:56:46.662133 | orchestrator | 2026-03-19 00:56:46.662144 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 00:56:46.662155 | orchestrator | Thursday 19 March 2026 00:46:27 +0000 (0:00:04.551) 0:00:46.285 ******** 2026-03-19 00:56:46.662166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 00:56:46.662178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 00:56:46.662189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 00:56:46.662200 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 00:56:46.662219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 00:56:46.662226 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 00:56:46.662233 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662240 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-0)  2026-03-19 00:56:46.662261 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 00:56:46.662268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 00:56:46.662275 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.662281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 00:56:46.662288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 00:56:46.662294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 00:56:46.662301 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.662307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 00:56:46.662314 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 00:56:46.662320 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 00:56:46.662327 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.662333 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.662340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 00:56:46.662347 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 00:56:46.662353 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 00:56:46.662360 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.662366 | orchestrator | 2026-03-19 00:56:46.662373 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 00:56:46.662380 | orchestrator | Thursday 19 March 2026 00:46:29 +0000 (0:00:01.492) 0:00:47.778 ******** 2026-03-19 00:56:46.662387 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.662393 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.662400 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.662407 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.662413 | orchestrator | 2026-03-19 00:56:46.662420 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 00:56:46.662427 | orchestrator | Thursday 19 March 2026 00:46:30 +0000 (0:00:01.596) 0:00:49.374 ******** 2026-03-19 00:56:46.662434 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662446 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.662453 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.662459 | orchestrator | 2026-03-19 00:56:46.662466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 00:56:46.662473 | orchestrator | Thursday 19 March 2026 00:46:31 +0000 (0:00:00.389) 0:00:49.763 ******** 2026-03-19 00:56:46.662479 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662486 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.662493 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.662499 | orchestrator | 2026-03-19 00:56:46.662511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 00:56:46.662518 | orchestrator | Thursday 19 March 2026 00:46:31 +0000 (0:00:00.302) 0:00:50.066 ******** 2026-03-19 00:56:46.662525 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662531 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.662538 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.662545 | orchestrator | 2026-03-19 00:56:46.662551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 00:56:46.662558 | orchestrator | Thursday 19 March 2026 00:46:32 +0000 (0:00:00.601) 0:00:50.667 ******** 2026-03-19 00:56:46.662564 | orchestrator | 
ok: [testbed-node-3] 2026-03-19 00:56:46.662571 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.662646 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.662663 | orchestrator | 2026-03-19 00:56:46.662672 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 00:56:46.662679 | orchestrator | Thursday 19 March 2026 00:46:32 +0000 (0:00:00.651) 0:00:51.318 ******** 2026-03-19 00:56:46.662685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.662692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.662699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.662705 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662712 | orchestrator | 2026-03-19 00:56:46.662719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 00:56:46.662725 | orchestrator | Thursday 19 March 2026 00:46:33 +0000 (0:00:00.544) 0:00:51.862 ******** 2026-03-19 00:56:46.662732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.662739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.662745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.662752 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662758 | orchestrator | 2026-03-19 00:56:46.662765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 00:56:46.662771 | orchestrator | Thursday 19 March 2026 00:46:33 +0000 (0:00:00.486) 0:00:52.349 ******** 2026-03-19 00:56:46.662778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.662784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.662791 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-19 00:56:46.662798 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.662804 | orchestrator | 2026-03-19 00:56:46.662811 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 00:56:46.662817 | orchestrator | Thursday 19 March 2026 00:46:34 +0000 (0:00:00.467) 0:00:52.817 ******** 2026-03-19 00:56:46.662824 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.662831 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.662837 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.662844 | orchestrator | 2026-03-19 00:56:46.662850 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 00:56:46.662857 | orchestrator | Thursday 19 March 2026 00:46:34 +0000 (0:00:00.372) 0:00:53.190 ******** 2026-03-19 00:56:46.662864 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 00:56:46.662871 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 00:56:46.662883 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 00:56:46.662890 | orchestrator | 2026-03-19 00:56:46.662897 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 00:56:46.662904 | orchestrator | Thursday 19 March 2026 00:46:35 +0000 (0:00:01.007) 0:00:54.197 ******** 2026-03-19 00:56:46.662911 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:56:46.662918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:56:46.662925 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:56:46.662937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 00:56:46.662944 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 00:56:46.662951 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 00:56:46.662958 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 00:56:46.662965 | orchestrator | 2026-03-19 00:56:46.662971 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 00:56:46.662978 | orchestrator | Thursday 19 March 2026 00:46:36 +0000 (0:00:00.771) 0:00:54.969 ******** 2026-03-19 00:56:46.662984 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:56:46.662991 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:56:46.662998 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:56:46.663004 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 00:56:46.663011 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 00:56:46.663018 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 00:56:46.663030 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 00:56:46.663041 | orchestrator | 2026-03-19 00:56:46.663051 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 00:56:46.663071 | orchestrator | Thursday 19 March 2026 00:46:38 +0000 (0:00:01.822) 0:00:56.791 ******** 2026-03-19 00:56:46.663083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.663095 | orchestrator | 2026-03-19 00:56:46.663107 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-19 00:56:46.663118 | orchestrator | Thursday 19 March 2026 00:46:39 +0000 (0:00:01.431) 0:00:58.223 ******** 2026-03-19 00:56:46.663126 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.663133 | orchestrator | 2026-03-19 00:56:46.663139 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 00:56:46.663146 | orchestrator | Thursday 19 March 2026 00:46:41 +0000 (0:00:01.251) 0:00:59.475 ******** 2026-03-19 00:56:46.663152 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.663159 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.663166 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.663172 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.663179 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.663186 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.663192 | orchestrator | 2026-03-19 00:56:46.663198 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 00:56:46.663204 | orchestrator | Thursday 19 March 2026 00:46:42 +0000 (0:00:01.352) 0:01:00.828 ******** 2026-03-19 00:56:46.663210 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663216 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663222 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663229 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663235 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663241 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663247 | orchestrator | 2026-03-19 00:56:46.663253 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 00:56:46.663260 | orchestrator | Thursday 19 March 2026 00:46:43 +0000 
(0:00:00.936) 0:01:01.764 ******** 2026-03-19 00:56:46.663266 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663272 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663285 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663291 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663297 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663304 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663310 | orchestrator | 2026-03-19 00:56:46.663316 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 00:56:46.663326 | orchestrator | Thursday 19 March 2026 00:46:44 +0000 (0:00:00.993) 0:01:02.758 ******** 2026-03-19 00:56:46.663336 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663346 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663357 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663368 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663380 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663388 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663394 | orchestrator | 2026-03-19 00:56:46.663400 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 00:56:46.663407 | orchestrator | Thursday 19 March 2026 00:46:45 +0000 (0:00:00.897) 0:01:03.655 ******** 2026-03-19 00:56:46.663413 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.663420 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.663431 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.663442 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.663451 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.663463 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.663470 | orchestrator | 2026-03-19 00:56:46.663476 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-03-19 00:56:46.663482 | orchestrator | Thursday 19 March 2026 00:46:46 +0000 (0:00:01.618) 0:01:05.274 ******** 2026-03-19 00:56:46.663490 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.663501 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.663512 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.663519 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663525 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663531 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663538 | orchestrator | 2026-03-19 00:56:46.663544 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 00:56:46.663550 | orchestrator | Thursday 19 March 2026 00:46:47 +0000 (0:00:01.153) 0:01:06.427 ******** 2026-03-19 00:56:46.663556 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.663563 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.663569 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.663575 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663582 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663588 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663594 | orchestrator | 2026-03-19 00:56:46.663615 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 00:56:46.663624 | orchestrator | Thursday 19 March 2026 00:46:48 +0000 (0:00:00.926) 0:01:07.354 ******** 2026-03-19 00:56:46.663630 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663636 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663642 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663648 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.663655 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.663661 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.663667 | orchestrator | 2026-03-19 
00:56:46.663673 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 00:56:46.663680 | orchestrator | Thursday 19 March 2026 00:46:50 +0000 (0:00:01.209) 0:01:08.564 ******** 2026-03-19 00:56:46.663686 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663692 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663698 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663704 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.663710 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.663716 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.663722 | orchestrator | 2026-03-19 00:56:46.663738 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 00:56:46.663745 | orchestrator | Thursday 19 March 2026 00:46:51 +0000 (0:00:01.773) 0:01:10.338 ******** 2026-03-19 00:56:46.663751 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.663757 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.663764 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.663770 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663776 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663782 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663788 | orchestrator | 2026-03-19 00:56:46.663794 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 00:56:46.663801 | orchestrator | Thursday 19 March 2026 00:46:52 +0000 (0:00:00.832) 0:01:11.171 ******** 2026-03-19 00:56:46.663807 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.663813 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.663819 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.663826 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.663832 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.663838 | 
orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.663845 | orchestrator | 2026-03-19 00:56:46.663851 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 00:56:46.663857 | orchestrator | Thursday 19 March 2026 00:46:53 +0000 (0:00:00.924) 0:01:12.095 ******** 2026-03-19 00:56:46.663863 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663869 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663876 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663882 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663888 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663895 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663901 | orchestrator | 2026-03-19 00:56:46.663907 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 00:56:46.663913 | orchestrator | Thursday 19 March 2026 00:46:54 +0000 (0:00:01.061) 0:01:13.156 ******** 2026-03-19 00:56:46.663919 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663925 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663932 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663938 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.663944 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.663950 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.663956 | orchestrator | 2026-03-19 00:56:46.663963 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 00:56:46.663969 | orchestrator | Thursday 19 March 2026 00:46:55 +0000 (0:00:00.774) 0:01:13.931 ******** 2026-03-19 00:56:46.663975 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.663981 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.663987 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.663993 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
00:56:46.663999 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.664005 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.664012 | orchestrator | 2026-03-19 00:56:46.664018 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 00:56:46.664024 | orchestrator | Thursday 19 March 2026 00:46:56 +0000 (0:00:01.082) 0:01:15.013 ******** 2026-03-19 00:56:46.664030 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.664037 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.664043 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.664049 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.664055 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.664061 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.664067 | orchestrator | 2026-03-19 00:56:46.664073 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 00:56:46.664080 | orchestrator | Thursday 19 March 2026 00:46:57 +0000 (0:00:00.746) 0:01:15.760 ******** 2026-03-19 00:56:46.664086 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.664096 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.664102 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.664108 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.664121 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.664132 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.664142 | orchestrator | 2026-03-19 00:56:46.664152 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 00:56:46.664163 | orchestrator | Thursday 19 March 2026 00:46:58 +0000 (0:00:00.830) 0:01:16.591 ******** 2026-03-19 00:56:46.664173 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.664182 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
00:56:46.664191 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.664200 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.664210 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.664220 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.664229 | orchestrator | 2026-03-19 00:56:46.664240 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 00:56:46.664250 | orchestrator | Thursday 19 March 2026 00:46:58 +0000 (0:00:00.640) 0:01:17.231 ******** 2026-03-19 00:56:46.664261 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.664272 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.664283 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.664293 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.664301 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.664307 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.664313 | orchestrator | 2026-03-19 00:56:46.664319 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 00:56:46.664326 | orchestrator | Thursday 19 March 2026 00:46:59 +0000 (0:00:00.970) 0:01:18.202 ******** 2026-03-19 00:56:46.664332 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.664338 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.664344 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.664350 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.664356 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.664362 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.664368 | orchestrator | 2026-03-19 00:56:46.664374 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 00:56:46.664381 | orchestrator | Thursday 19 March 2026 00:47:01 +0000 (0:00:01.508) 0:01:19.710 ******** 2026-03-19 00:56:46.664387 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.664393 | 
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Thursday 19 March 2026 00:47:02 +0000 (0:00:01.702) 0:01:21.413 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Thursday 19 March 2026 00:47:05 +0000 (0:00:02.465) 0:01:23.878 ********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Stop lvmetad] ************************************
Thursday 19 March 2026 00:47:06 +0000 (0:00:01.532) 0:01:25.411 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Thursday 19 March 2026 00:47:07 +0000 (0:00:00.610) 0:01:26.021 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Thursday 19 March 2026 00:47:08 +0000 (0:00:01.160) 0:01:27.181 ********
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Thursday 19 March 2026 00:47:10 +0000 (0:00:01.495) 0:01:28.676 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Thursday 19 March 2026 00:47:11 +0000 (0:00:01.353) 0:01:30.030 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Thursday 19 March 2026 00:47:12 +0000 (0:00:00.915) 0:01:30.945 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Thursday 19 March 2026 00:47:13 +0000 (0:00:01.372) 0:01:32.318 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Thursday 19 March 2026 00:47:14 +0000 (0:00:01.028) 0:01:33.346 ********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Thursday 19 March 2026 00:47:16 +0000 (0:00:01.786) 0:01:35.132 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Thursday 19 March 2026 00:48:10 +0000 (0:00:53.392) 0:02:28.525 ********
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Thursday 19 March 2026 00:48:10 +0000 (0:00:00.813) 0:02:29.339 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Thursday 19 March 2026 00:48:11 +0000 (0:00:01.050) 0:02:30.389 ********
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Thursday 19 March 2026 00:48:12 +0000 (0:00:00.123) 0:02:30.513 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Thursday 19 March 2026 00:48:12 +0000 (0:00:00.667) 0:02:31.181 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Thursday 19 March 2026 00:48:13 +0000 (0:00:00.837) 0:02:32.018 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Thursday 19 March 2026 00:48:14 +0000 (0:00:00.754) 0:02:32.773 ********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Thursday 19 March 2026 00:48:16 +0000 (0:00:02.281) 0:02:35.055 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Thursday 19 March 2026 00:48:17 +0000 (0:00:00.825) 0:02:35.881 ********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Thursday 19 March 2026 00:48:18 +0000 (0:00:01.335) 0:02:37.216 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Thursday 19 March 2026 00:48:19 +0000 (0:00:00.681) 0:02:37.898 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Thursday 19 March 2026 00:48:20 +0000 (0:00:00.690) 0:02:38.588 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Thursday 19 March 2026 00:48:20 +0000 (0:00:00.567) 0:02:39.156 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Thursday 19 March 2026 00:48:21 +0000 (0:00:00.761) 0:02:39.918 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Thursday 19 March 2026 00:48:22 +0000 (0:00:00.648) 0:02:40.567 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Thursday 19 March 2026 00:48:22 +0000 (0:00:00.800) 0:02:41.367 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Thursday 19 March 2026 00:48:23 +0000 (0:00:00.692) 0:02:42.060 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Thursday 19 March 2026 00:48:24 +0000 (0:00:00.906) 0:02:42.967 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Thursday 19 March 2026 00:48:25 +0000 (0:00:01.347) 0:02:44.314 ********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Thursday 19 March 2026 00:48:26 +0000 (0:00:01.077) 0:02:45.391 ********
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Thursday 19 March 2026 00:48:34 +0000 (0:00:07.320) 0:02:52.712 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Thursday 19 March 2026 00:48:35 +0000 (0:00:00.925) 0:02:53.637 ********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Thursday 19 March 2026 00:48:35 +0000 (0:00:00.788) 0:02:54.425 ********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Thursday 19 March 2026 00:48:37 +0000 (0:00:01.461) 0:02:55.887 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Thursday 19 March 2026 00:48:38 +0000 (0:00:00.860) 0:02:56.530 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Thursday 19 March 2026 00:48:38 +0000 (0:00:00.763) 0:02:57.390 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Thursday 19 March 2026 00:48:39 +0000 (0:00:00.649) 0:02:58.154 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact _devices] *****************************************
Thursday 19 March 2026 00:48:40 +0000 (0:00:00.649) 0:02:58.804 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 19 March 2026 00:48:41 +0000 (0:00:00.723) 0:02:59.527 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping:
[testbed-node-0] 2026-03-19 00:56:46.667182 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667188 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667193 | orchestrator | 2026-03-19 00:56:46.667199 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 00:56:46.667204 | orchestrator | Thursday 19 March 2026 00:48:41 +0000 (0:00:00.562) 0:03:00.090 ******** 2026-03-19 00:56:46.667210 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.667216 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.667221 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.667226 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667232 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667237 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667246 | orchestrator | 2026-03-19 00:56:46.667252 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 00:56:46.667257 | orchestrator | Thursday 19 March 2026 00:48:42 +0000 (0:00:00.739) 0:03:00.830 ******** 2026-03-19 00:56:46.667263 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.667268 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.667274 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.667283 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667288 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667294 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667304 | orchestrator | 2026-03-19 00:56:46.667318 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 00:56:46.667328 | orchestrator | Thursday 19 March 2026 00:48:42 +0000 (0:00:00.592) 0:03:01.422 ******** 2026-03-19 00:56:46.667338 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 00:56:46.667346 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667355 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667363 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.667372 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.667381 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.667390 | orchestrator | 2026-03-19 00:56:46.667400 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 00:56:46.667410 | orchestrator | Thursday 19 March 2026 00:48:45 +0000 (0:00:02.828) 0:03:04.250 ******** 2026-03-19 00:56:46.667419 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.667428 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.667437 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.667444 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667449 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667457 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667468 | orchestrator | 2026-03-19 00:56:46.667481 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 00:56:46.667489 | orchestrator | Thursday 19 March 2026 00:48:46 +0000 (0:00:00.527) 0:03:04.778 ******** 2026-03-19 00:56:46.667499 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.667507 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.667515 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667524 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.667532 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667539 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667547 | orchestrator | 2026-03-19 00:56:46.667555 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 00:56:46.667563 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 
(0:00:00.704) 0:03:05.483 ******** 2026-03-19 00:56:46.667572 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.667580 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.667588 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.667597 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667619 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667628 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667637 | orchestrator | 2026-03-19 00:56:46.667646 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 00:56:46.667655 | orchestrator | Thursday 19 March 2026 00:48:47 +0000 (0:00:00.719) 0:03:06.203 ******** 2026-03-19 00:56:46.667664 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 00:56:46.667673 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 00:56:46.667682 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-19 00:56:46.667691 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667707 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667718 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667727 | orchestrator | 2026-03-19 00:56:46.667736 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 00:56:46.667750 | orchestrator | Thursday 19 March 2026 00:48:48 +0000 (0:00:01.044) 0:03:07.247 ******** 2026-03-19 00:56:46.667763 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-19 00:56:46.667784 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-19 00:56:46.667795 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.667804 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-19 00:56:46.667814 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-19 00:56:46.667834 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.667844 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667853 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667861 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667866 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-19 00:56:46.667872 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 
'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-19 00:56:46.667877 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.667883 | orchestrator | 2026-03-19 00:56:46.667888 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 00:56:46.667894 | orchestrator | Thursday 19 March 2026 00:48:49 +0000 (0:00:00.911) 0:03:08.158 ******** 2026-03-19 00:56:46.667899 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.667904 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.667910 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.667915 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667920 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667926 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667931 | orchestrator | 2026-03-19 00:56:46.667936 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 00:56:46.667942 | orchestrator | Thursday 19 March 2026 00:48:50 +0000 (0:00:00.655) 0:03:08.814 ******** 2026-03-19 00:56:46.667947 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.667952 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.667958 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.667963 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.667968 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.667974 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.667979 | orchestrator | 2026-03-19 00:56:46.667985 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 00:56:46.667990 
| orchestrator | Thursday 19 March 2026 00:48:50 +0000 (0:00:00.553) 0:03:09.367 ******** 2026-03-19 00:56:46.667995 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668001 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.668010 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.668016 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668021 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668026 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668032 | orchestrator | 2026-03-19 00:56:46.668037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 00:56:46.668042 | orchestrator | Thursday 19 March 2026 00:48:51 +0000 (0:00:00.823) 0:03:10.191 ******** 2026-03-19 00:56:46.668048 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668053 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.668058 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.668064 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668069 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668075 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668080 | orchestrator | 2026-03-19 00:56:46.668086 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 00:56:46.668096 | orchestrator | Thursday 19 March 2026 00:48:52 +0000 (0:00:00.586) 0:03:10.777 ******** 2026-03-19 00:56:46.668102 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668107 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.668113 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.668118 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668123 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668128 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668134 | orchestrator | 2026-03-19 
00:56:46.668139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 00:56:46.668144 | orchestrator | Thursday 19 March 2026 00:48:53 +0000 (0:00:00.755) 0:03:11.533 ******** 2026-03-19 00:56:46.668150 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.668155 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.668161 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668166 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.668171 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668177 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668182 | orchestrator | 2026-03-19 00:56:46.668187 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 00:56:46.668193 | orchestrator | Thursday 19 March 2026 00:48:53 +0000 (0:00:00.586) 0:03:12.120 ******** 2026-03-19 00:56:46.668198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.668203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.668209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.668214 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668220 | orchestrator | 2026-03-19 00:56:46.668225 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 00:56:46.668230 | orchestrator | Thursday 19 March 2026 00:48:54 +0000 (0:00:00.632) 0:03:12.752 ******** 2026-03-19 00:56:46.668236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.668241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.668247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.668252 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668257 | orchestrator | 2026-03-19 
00:56:46.668263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 00:56:46.668272 | orchestrator | Thursday 19 March 2026 00:48:54 +0000 (0:00:00.625) 0:03:13.377 ******** 2026-03-19 00:56:46.668277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.668283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.668288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.668293 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668299 | orchestrator | 2026-03-19 00:56:46.668304 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 00:56:46.668314 | orchestrator | Thursday 19 March 2026 00:48:55 +0000 (0:00:00.625) 0:03:14.003 ******** 2026-03-19 00:56:46.668320 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.668325 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.668330 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.668336 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668341 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668346 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668352 | orchestrator | 2026-03-19 00:56:46.668361 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 00:56:46.668369 | orchestrator | Thursday 19 March 2026 00:48:56 +0000 (0:00:00.524) 0:03:14.527 ******** 2026-03-19 00:56:46.668376 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 00:56:46.668384 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 00:56:46.668391 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 00:56:46.668399 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-19 00:56:46.668409 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668422 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-19 00:56:46.668431 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668440 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-19 00:56:46.668448 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668456 | orchestrator | 2026-03-19 00:56:46.668465 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 00:56:46.668474 | orchestrator | Thursday 19 March 2026 00:48:57 +0000 (0:00:01.827) 0:03:16.355 ******** 2026-03-19 00:56:46.668483 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.668492 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.668502 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.668511 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.668520 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.668529 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.668538 | orchestrator | 2026-03-19 00:56:46.668547 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 00:56:46.668556 | orchestrator | Thursday 19 March 2026 00:49:00 +0000 (0:00:02.500) 0:03:18.855 ******** 2026-03-19 00:56:46.668565 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.668574 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.668583 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.668592 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.668637 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.668648 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.668657 | orchestrator | 2026-03-19 00:56:46.668666 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-19 00:56:46.668675 | orchestrator | Thursday 19 March 2026 00:49:01 +0000 (0:00:01.328) 0:03:20.183 ******** 2026-03-19 
00:56:46.668684 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.668692 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.668701 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.668710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.668719 | orchestrator | 2026-03-19 00:56:46.668729 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-19 00:56:46.668745 | orchestrator | Thursday 19 March 2026 00:49:02 +0000 (0:00:00.884) 0:03:21.068 ******** 2026-03-19 00:56:46.668754 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.668764 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.668773 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.668783 | orchestrator | 2026-03-19 00:56:46.668792 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-19 00:56:46.668799 | orchestrator | Thursday 19 March 2026 00:49:02 +0000 (0:00:00.340) 0:03:21.408 ******** 2026-03-19 00:56:46.668805 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.668817 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.668823 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.668828 | orchestrator | 2026-03-19 00:56:46.668834 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-19 00:56:46.668839 | orchestrator | Thursday 19 March 2026 00:49:04 +0000 (0:00:01.155) 0:03:22.564 ******** 2026-03-19 00:56:46.668844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 00:56:46.668850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 00:56:46.668855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 00:56:46.668860 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
00:56:46.668866 | orchestrator | 2026-03-19 00:56:46.668871 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-19 00:56:46.668877 | orchestrator | Thursday 19 March 2026 00:49:04 +0000 (0:00:00.729) 0:03:23.293 ******** 2026-03-19 00:56:46.668882 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.668888 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.668893 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.668898 | orchestrator | 2026-03-19 00:56:46.668905 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-19 00:56:46.668914 | orchestrator | Thursday 19 March 2026 00:49:05 +0000 (0:00:00.393) 0:03:23.687 ******** 2026-03-19 00:56:46.668927 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.668935 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.668942 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.668950 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.668958 | orchestrator | 2026-03-19 00:56:46.668964 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-19 00:56:46.668977 | orchestrator | Thursday 19 March 2026 00:49:06 +0000 (0:00:01.063) 0:03:24.750 ******** 2026-03-19 00:56:46.668985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.668994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.669002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.669011 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669019 | orchestrator | 2026-03-19 00:56:46.669027 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-19 00:56:46.669032 | orchestrator | Thursday 19 
March 2026 00:49:06 +0000 (0:00:00.329) 0:03:25.079 ******** 2026-03-19 00:56:46.669038 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669046 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.669054 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.669062 | orchestrator | 2026-03-19 00:56:46.669069 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-19 00:56:46.669077 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:00.415) 0:03:25.495 ******** 2026-03-19 00:56:46.669086 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669094 | orchestrator | 2026-03-19 00:56:46.669103 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-19 00:56:46.669111 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:00.170) 0:03:25.666 ******** 2026-03-19 00:56:46.669119 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669127 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.669132 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.669137 | orchestrator | 2026-03-19 00:56:46.669141 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-19 00:56:46.669150 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:00.315) 0:03:25.982 ******** 2026-03-19 00:56:46.669158 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669165 | orchestrator | 2026-03-19 00:56:46.669173 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-19 00:56:46.669181 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:00.182) 0:03:26.164 ******** 2026-03-19 00:56:46.669196 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669204 | orchestrator | 2026-03-19 00:56:46.669212 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] 
************** 2026-03-19 00:56:46.669220 | orchestrator | Thursday 19 March 2026 00:49:07 +0000 (0:00:00.237) 0:03:26.402 ******** 2026-03-19 00:56:46.669229 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669234 | orchestrator | 2026-03-19 00:56:46.669238 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-19 00:56:46.669243 | orchestrator | Thursday 19 March 2026 00:49:08 +0000 (0:00:00.111) 0:03:26.514 ******** 2026-03-19 00:56:46.669251 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669259 | orchestrator | 2026-03-19 00:56:46.669267 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-19 00:56:46.669275 | orchestrator | Thursday 19 March 2026 00:49:08 +0000 (0:00:00.219) 0:03:26.733 ******** 2026-03-19 00:56:46.669283 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669291 | orchestrator | 2026-03-19 00:56:46.669299 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-19 00:56:46.669307 | orchestrator | Thursday 19 March 2026 00:49:08 +0000 (0:00:00.208) 0:03:26.942 ******** 2026-03-19 00:56:46.669315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.669323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.669332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.669340 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669348 | orchestrator | 2026-03-19 00:56:46.669356 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-19 00:56:46.669370 | orchestrator | Thursday 19 March 2026 00:49:09 +0000 (0:00:00.564) 0:03:27.506 ******** 2026-03-19 00:56:46.669378 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.669386 | orchestrator | skipping: [testbed-node-4] 
2026-03-19 00:56:46.669395 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.669403 | orchestrator |
2026-03-19 00:56:46.669411 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-19 00:56:46.669420 | orchestrator | Thursday 19 March 2026 00:49:09 +0000 (0:00:00.432) 0:03:27.939 ********
2026-03-19 00:56:46.669427 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.669435 | orchestrator |
2026-03-19 00:56:46.669443 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-19 00:56:46.669452 | orchestrator | Thursday 19 March 2026 00:49:09 +0000 (0:00:00.209) 0:03:28.148 ********
2026-03-19 00:56:46.669460 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.669467 | orchestrator |
2026-03-19 00:56:46.669475 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-19 00:56:46.669484 | orchestrator | Thursday 19 March 2026 00:49:09 +0000 (0:00:00.190) 0:03:28.339 ********
2026-03-19 00:56:46.669492 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.669500 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.669507 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.669515 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.669523 | orchestrator |
2026-03-19 00:56:46.669532 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-19 00:56:46.669539 | orchestrator | Thursday 19 March 2026 00:49:10 +0000 (0:00:01.017) 0:03:29.356 ********
2026-03-19 00:56:46.669547 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.669555 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.669563 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.669571 | orchestrator |
2026-03-19 00:56:46.669580 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-19 00:56:46.669588 | orchestrator | Thursday 19 March 2026 00:49:11 +0000 (0:00:00.354) 0:03:29.711 ********
2026-03-19 00:56:46.669596 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.669622 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.669630 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.669639 | orchestrator |
2026-03-19 00:56:46.669651 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-19 00:56:46.669659 | orchestrator | Thursday 19 March 2026 00:49:12 +0000 (0:00:01.143) 0:03:30.854 ********
2026-03-19 00:56:46.669666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 00:56:46.669674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 00:56:46.669682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 00:56:46.669690 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.669698 | orchestrator |
2026-03-19 00:56:46.669706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-19 00:56:46.669714 | orchestrator | Thursday 19 March 2026 00:49:13 +0000 (0:00:00.811) 0:03:31.666 ********
2026-03-19 00:56:46.669722 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.669730 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.669738 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.669746 | orchestrator |
2026-03-19 00:56:46.669755 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-19 00:56:46.669763 | orchestrator | Thursday 19 March 2026 00:49:13 +0000 (0:00:00.382) 0:03:32.049 ********
2026-03-19 00:56:46.669771 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.669778 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.669786 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.669794 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.669802 | orchestrator |
2026-03-19 00:56:46.669810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-19 00:56:46.669818 | orchestrator | Thursday 19 March 2026 00:49:14 +0000 (0:00:00.997) 0:03:33.047 ********
2026-03-19 00:56:46.669826 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.669834 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.669842 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.669850 | orchestrator |
2026-03-19 00:56:46.669855 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-19 00:56:46.669860 | orchestrator | Thursday 19 March 2026 00:49:14 +0000 (0:00:00.273) 0:03:33.320 ********
2026-03-19 00:56:46.669865 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.669869 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.669874 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.669879 | orchestrator |
2026-03-19 00:56:46.669883 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-19 00:56:46.669888 | orchestrator | Thursday 19 March 2026 00:49:16 +0000 (0:00:01.190) 0:03:34.511 ********
2026-03-19 00:56:46.669893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 00:56:46.669897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 00:56:46.669902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 00:56:46.669907 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.669912 | orchestrator |
2026-03-19 00:56:46.669916 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-19 00:56:46.669921 | orchestrator | Thursday 19 March 2026 00:49:16 +0000 (0:00:00.574) 0:03:35.085 ********
2026-03-19 00:56:46.669926 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.669931 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.669935 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.669940 | orchestrator |
2026-03-19 00:56:46.669945 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-19 00:56:46.669950 | orchestrator | Thursday 19 March 2026 00:49:16 +0000 (0:00:00.296) 0:03:35.382 ********
2026-03-19 00:56:46.669954 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.669959 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.669964 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.669973 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.669978 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.669987 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.669992 | orchestrator |
2026-03-19 00:56:46.669997 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-19 00:56:46.670001 | orchestrator | Thursday 19 March 2026 00:49:17 +0000 (0:00:00.540) 0:03:35.922 ********
2026-03-19 00:56:46.670006 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.670089 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.670097 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.670102 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.670107 | orchestrator |
2026-03-19 00:56:46.670111 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-19 00:56:46.670116 | orchestrator | Thursday 19 March 2026 00:49:18 +0000 (0:00:00.977) 0:03:36.900 ********
2026-03-19 00:56:46.670121 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670126 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670130 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670135 | orchestrator |
2026-03-19 00:56:46.670140 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-19 00:56:46.670145 | orchestrator | Thursday 19 March 2026 00:49:18 +0000 (0:00:00.300) 0:03:37.201 ********
2026-03-19 00:56:46.670150 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.670154 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.670159 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.670164 | orchestrator |
2026-03-19 00:56:46.670169 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-19 00:56:46.670173 | orchestrator | Thursday 19 March 2026 00:49:20 +0000 (0:00:01.363) 0:03:38.565 ********
2026-03-19 00:56:46.670178 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 00:56:46.670183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 00:56:46.670188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 00:56:46.670193 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670197 | orchestrator |
2026-03-19 00:56:46.670202 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-19 00:56:46.670207 | orchestrator | Thursday 19 March 2026 00:49:20 +0000 (0:00:00.636) 0:03:39.201 ********
2026-03-19 00:56:46.670212 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670222 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670226 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670231 | orchestrator |
2026-03-19 00:56:46.670236 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-19 00:56:46.670241 | orchestrator |
2026-03-19 00:56:46.670246 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 00:56:46.670250 | orchestrator | Thursday 19 March 2026 00:49:21 +0000 (0:00:00.486) 0:03:39.687 ********
2026-03-19 00:56:46.670255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.670261 | orchestrator |
2026-03-19 00:56:46.670265 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 00:56:46.670270 | orchestrator | Thursday 19 March 2026 00:49:21 +0000 (0:00:00.654) 0:03:40.342 ********
2026-03-19 00:56:46.670275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.670280 | orchestrator |
2026-03-19 00:56:46.670285 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 00:56:46.670290 | orchestrator | Thursday 19 March 2026 00:49:22 +0000 (0:00:00.456) 0:03:40.798 ********
2026-03-19 00:56:46.670294 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670299 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670308 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670312 | orchestrator |
2026-03-19 00:56:46.670317 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 00:56:46.670323 | orchestrator | Thursday 19 March 2026 00:49:23 +0000 (0:00:00.714) 0:03:41.513 ********
2026-03-19 00:56:46.670331 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670339 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670347 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670355 | orchestrator |
2026-03-19 00:56:46.670363 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 00:56:46.670371 | orchestrator | Thursday 19 March 2026 00:49:23 +0000 (0:00:00.448) 0:03:41.962 ********
2026-03-19 00:56:46.670379 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670387 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670396 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670402 | orchestrator |
2026-03-19 00:56:46.670406 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 00:56:46.670411 | orchestrator | Thursday 19 March 2026 00:49:23 +0000 (0:00:00.254) 0:03:42.216 ********
2026-03-19 00:56:46.670416 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670421 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670425 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670430 | orchestrator |
2026-03-19 00:56:46.670435 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 00:56:46.670440 | orchestrator | Thursday 19 March 2026 00:49:24 +0000 (0:00:00.274) 0:03:42.490 ********
2026-03-19 00:56:46.670444 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670449 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670454 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670459 | orchestrator |
2026-03-19 00:56:46.670463 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 00:56:46.670468 | orchestrator | Thursday 19 March 2026 00:49:24 +0000 (0:00:00.764) 0:03:43.255 ********
2026-03-19 00:56:46.670473 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670478 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670482 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670487 | orchestrator |
2026-03-19 00:56:46.670492 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 00:56:46.670497 | orchestrator | Thursday 19 March 2026 00:49:25 +0000 (0:00:00.281) 0:03:43.537 ********
2026-03-19 00:56:46.670523 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670529 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670534 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670539 | orchestrator |
2026-03-19 00:56:46.670544 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 00:56:46.670548 | orchestrator | Thursday 19 March 2026 00:49:25 +0000 (0:00:00.460) 0:03:43.997 ********
2026-03-19 00:56:46.670553 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670558 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670563 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670567 | orchestrator |
2026-03-19 00:56:46.670572 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 00:56:46.670577 | orchestrator | Thursday 19 March 2026 00:49:26 +0000 (0:00:00.733) 0:03:44.730 ********
2026-03-19 00:56:46.670582 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670587 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670591 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670596 | orchestrator |
2026-03-19 00:56:46.670616 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 00:56:46.670623 | orchestrator | Thursday 19 March 2026 00:49:26 +0000 (0:00:00.702) 0:03:45.433 ********
2026-03-19 00:56:46.670628 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670633 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670638 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670642 | orchestrator |
2026-03-19 00:56:46.670651 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 00:56:46.670656 | orchestrator | Thursday 19 March 2026 00:49:27 +0000 (0:00:00.260) 0:03:45.693 ********
2026-03-19 00:56:46.670661 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670666 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670671 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670679 | orchestrator |
2026-03-19 00:56:46.670687 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 00:56:46.670695 | orchestrator | Thursday 19 March 2026 00:49:27 +0000 (0:00:00.452) 0:03:46.146 ********
2026-03-19 00:56:46.670703 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670711 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670719 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670728 | orchestrator |
2026-03-19 00:56:46.670733 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 00:56:46.670741 | orchestrator | Thursday 19 March 2026 00:49:27 +0000 (0:00:00.268) 0:03:46.415 ********
2026-03-19 00:56:46.670746 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670751 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670756 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670761 | orchestrator |
2026-03-19 00:56:46.670765 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 00:56:46.670770 | orchestrator | Thursday 19 March 2026 00:49:28 +0000 (0:00:00.272) 0:03:46.687 ********
2026-03-19 00:56:46.670775 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670780 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670785 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670789 | orchestrator |
2026-03-19 00:56:46.670794 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 00:56:46.670799 | orchestrator | Thursday 19 March 2026 00:49:28 +0000 (0:00:00.274) 0:03:46.962 ********
2026-03-19 00:56:46.670804 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670808 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670813 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670818 | orchestrator |
2026-03-19 00:56:46.670823 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 00:56:46.670828 | orchestrator | Thursday 19 March 2026 00:49:28 +0000 (0:00:00.438) 0:03:47.400 ********
2026-03-19 00:56:46.670832 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.670837 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.670842 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.670850 | orchestrator |
2026-03-19 00:56:46.670858 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 00:56:46.670865 | orchestrator | Thursday 19 March 2026 00:49:29 +0000 (0:00:00.295) 0:03:47.696 ********
2026-03-19 00:56:46.670873 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670881 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670889 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670897 | orchestrator |
2026-03-19 00:56:46.670906 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 00:56:46.670914 | orchestrator | Thursday 19 March 2026 00:49:29 +0000 (0:00:00.301) 0:03:47.998 ********
2026-03-19 00:56:46.670923 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670929 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670933 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670938 | orchestrator |
2026-03-19 00:56:46.670943 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 00:56:46.670948 | orchestrator | Thursday 19 March 2026 00:49:29 +0000 (0:00:00.304) 0:03:48.303 ********
2026-03-19 00:56:46.670952 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670957 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.670962 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670966 | orchestrator |
2026-03-19 00:56:46.670971 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-19 00:56:46.670980 | orchestrator | Thursday 19 March 2026 00:49:30 +0000 (0:00:00.867) 0:03:49.170 ********
2026-03-19 00:56:46.670985 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.670990 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.670995 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.671000 | orchestrator |
2026-03-19 00:56:46.671004 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-19 00:56:46.671009 | orchestrator | Thursday 19 March 2026 00:49:31 +0000 (0:00:00.316) 0:03:49.487 ********
2026-03-19 00:56:46.671014 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.671019 | orchestrator |
2026-03-19 00:56:46.671024 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-19 00:56:46.671029 | orchestrator | Thursday 19 March 2026 00:49:31 +0000 (0:00:00.483) 0:03:49.970 ********
2026-03-19 00:56:46.671033 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.671038 | orchestrator |
2026-03-19 00:56:46.671066 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-19 00:56:46.671072 | orchestrator | Thursday 19 March 2026 00:49:31 +0000 (0:00:00.296) 0:03:50.267 ********
2026-03-19 00:56:46.671077 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-19 00:56:46.671082 | orchestrator |
2026-03-19 00:56:46.671086 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-19 00:56:46.671091 | orchestrator | Thursday 19 March 2026 00:49:32 +0000 (0:00:00.988) 0:03:51.256 ********
2026-03-19 00:56:46.671096 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.671101 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.671106 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.671111 | orchestrator |
2026-03-19 00:56:46.671115 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-19 00:56:46.671120 | orchestrator | Thursday 19 March 2026 00:49:33 +0000 (0:00:00.339) 0:03:51.595 ********
2026-03-19 00:56:46.671125 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.671130 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.671135 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.671139 | orchestrator |
2026-03-19 00:56:46.671144 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-19 00:56:46.671149 | orchestrator | Thursday 19 March 2026 00:49:33 +0000 (0:00:00.313) 0:03:51.908 ********
2026-03-19 00:56:46.671154 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671159 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671163 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671168 | orchestrator |
2026-03-19 00:56:46.671173 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-19 00:56:46.671178 | orchestrator | Thursday 19 March 2026 00:49:35 +0000 (0:00:01.705) 0:03:53.614 ********
2026-03-19 00:56:46.671183 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671187 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671192 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671197 | orchestrator |
2026-03-19 00:56:46.671202 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-19 00:56:46.671207 | orchestrator | Thursday 19 March 2026 00:49:36 +0000 (0:00:00.887) 0:03:54.502 ********
2026-03-19 00:56:46.671212 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671216 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671221 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671226 | orchestrator |
2026-03-19 00:56:46.671238 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-19 00:56:46.671243 | orchestrator | Thursday 19 March 2026 00:49:36 +0000 (0:00:00.747) 0:03:55.250 ********
2026-03-19 00:56:46.671248 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.671253 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.671258 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.671262 | orchestrator |
2026-03-19 00:56:46.671267 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-19 00:56:46.671275 | orchestrator | Thursday 19 March 2026 00:49:37 +0000 (0:00:00.577) 0:03:55.827 ********
2026-03-19 00:56:46.671280 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671288 | orchestrator |
2026-03-19 00:56:46.671299 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-19 00:56:46.671310 | orchestrator | Thursday 19 March 2026 00:49:38 +0000 (0:00:01.331) 0:03:57.158 ********
2026-03-19 00:56:46.671318 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.671326 | orchestrator |
2026-03-19 00:56:46.671334 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-19 00:56:46.671343 | orchestrator | Thursday 19 March 2026 00:49:39 +0000 (0:00:00.636) 0:03:57.795 ********
2026-03-19 00:56:46.671351 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 00:56:46.671360 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:56:46.671369 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:56:46.671375 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:56:46.671380 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-19 00:56:46.671385 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:56:46.671390 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:56:46.671395 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-19 00:56:46.671400 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-19 00:56:46.671404 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-19 00:56:46.671409 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:56:46.671414 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-19 00:56:46.671419 | orchestrator |
2026-03-19 00:56:46.671423 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-19 00:56:46.671428 | orchestrator | Thursday 19 March 2026 00:49:43 +0000 (0:00:03.834) 0:04:01.629 ********
2026-03-19 00:56:46.671433 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671438 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671443 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671447 | orchestrator |
2026-03-19 00:56:46.671452 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-19 00:56:46.671457 | orchestrator | Thursday 19 March 2026 00:49:45 +0000 (0:00:01.845) 0:04:03.475 ********
2026-03-19 00:56:46.671462 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.671466 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.671471 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.671476 | orchestrator |
2026-03-19 00:56:46.671481 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-19 00:56:46.671486 | orchestrator | Thursday 19 March 2026 00:49:45 +0000 (0:00:00.413) 0:04:03.889 ********
2026-03-19 00:56:46.671494 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.671506 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.671515 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.671522 | orchestrator |
2026-03-19 00:56:46.671530 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-19 00:56:46.671538 | orchestrator | Thursday 19 March 2026 00:49:46 +0000 (0:00:00.639) 0:04:04.529 ********
2026-03-19 00:56:46.671545 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671581 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671590 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671598 | orchestrator |
2026-03-19 00:56:46.671618 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-19 00:56:46.671626 | orchestrator | Thursday 19 March 2026 00:49:49 +0000 (0:00:03.354) 0:04:07.883 ********
2026-03-19 00:56:46.671633 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671640 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671648 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671662 | orchestrator |
2026-03-19 00:56:46.671669 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-19 00:56:46.671677 | orchestrator | Thursday 19 March 2026 00:49:51 +0000 (0:00:01.592) 0:04:09.476 ********
2026-03-19 00:56:46.671684 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.671693 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.671701 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.671706 | orchestrator |
2026-03-19 00:56:46.671711 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-19 00:56:46.671716 | orchestrator | Thursday 19 March 2026 00:49:51 +0000 (0:00:00.341) 0:04:09.817 ********
2026-03-19 00:56:46.671721 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.671726 | orchestrator |
2026-03-19 00:56:46.671731 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-19 00:56:46.671735 | orchestrator | Thursday 19 March 2026 00:49:51 +0000 (0:00:00.475) 0:04:10.293 ********
2026-03-19 00:56:46.671741 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.671749 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.671757 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.671765 | orchestrator |
2026-03-19 00:56:46.671774 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-19 00:56:46.671781 | orchestrator | Thursday 19 March 2026 00:49:52 +0000 (0:00:00.547) 0:04:10.841 ********
2026-03-19 00:56:46.671794 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.671803 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.671811 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.671818 | orchestrator |
2026-03-19 00:56:46.671826 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-19 00:56:46.671839 | orchestrator | Thursday 19 March 2026 00:49:53 +0000 (0:00:00.679) 0:04:11.520 ********
2026-03-19 00:56:46.671848 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.671855 | orchestrator |
2026-03-19 00:56:46.671860 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-19 00:56:46.671865 | orchestrator | Thursday 19 March 2026 00:49:55 +0000 (0:00:02.110) 0:04:13.631 ********
2026-03-19 00:56:46.671870 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671875 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671879 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671884 | orchestrator |
2026-03-19 00:56:46.671889 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-19 00:56:46.671894 | orchestrator | Thursday 19 March 2026 00:49:58 +0000 (0:00:02.984) 0:04:16.615 ********
2026-03-19 00:56:46.671898 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671903 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671908 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671913 | orchestrator |
2026-03-19 00:56:46.671917 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-19 00:56:46.671922 | orchestrator | Thursday 19 March 2026 00:50:00 +0000 (0:00:02.077) 0:04:18.693 ********
2026-03-19 00:56:46.671927 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671932 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671936 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671941 | orchestrator |
2026-03-19 00:56:46.671946 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-19 00:56:46.671950 | orchestrator | Thursday 19 March 2026 00:50:02 +0000 (0:00:02.416) 0:04:21.110 ********
2026-03-19 00:56:46.671968 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.671973 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.671978 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.671983 | orchestrator |
2026-03-19 00:56:46.671988 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-19 00:56:46.671995 | orchestrator | Thursday 19 March 2026 00:50:05 +0000 (0:00:02.792) 0:04:23.902 ********
2026-03-19 00:56:46.672011 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.672020 | orchestrator |
2026-03-19 00:56:46.672029 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-19 00:56:46.672038 | orchestrator | Thursday 19 March 2026 00:50:06 +0000 (0:00:00.743) 0:04:24.647 ********
2026-03-19 00:56:46.672047 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-19 00:56:46.672056 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.672065 | orchestrator |
2026-03-19 00:56:46.672074 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-19 00:56:46.672082 | orchestrator | Thursday 19 March 2026 00:50:28 +0000 (0:00:22.231) 0:04:46.878 ********
2026-03-19 00:56:46.672091 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.672099 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.672107 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.672116 | orchestrator |
2026-03-19 00:56:46.672125 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-19 00:56:46.672133 | orchestrator | Thursday 19 March 2026 00:50:37 +0000 (0:00:09.507) 0:04:56.385 ********
2026-03-19 00:56:46.672142 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.672150 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.672158 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.672167 | orchestrator |
2026-03-19 00:56:46.672176 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-19 00:56:46.672220 | orchestrator | Thursday 19 March 2026 00:50:38 +0000 (0:00:00.287) 0:04:56.673 ********
2026-03-19 00:56:46.672230 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-19 00:56:46.672240 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-19 00:56:46.672250 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-19 00:56:46.672263 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-19 00:56:46.672272 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-19 00:56:46.672281 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__143ee4c2e6cb5cd49c3c24eb85ea631fd3963603'}])
2026-03-19 00:56:46.672296 | orchestrator |
2026-03-19 00:56:46.672305 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-19 00:56:46.672313 | orchestrator | Thursday 19 March 2026 00:50:53 +0000 (0:00:15.443) 0:05:12.116 ********
2026-03-19 00:56:46.672321 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.672329 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.672337 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.672345 | orchestrator |
2026-03-19 00:56:46.672353 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-19 00:56:46.672361 | orchestrator | Thursday 19 March 2026 00:50:53 +0000 (0:00:00.294) 0:05:12.411 ********
2026-03-19 00:56:46.672369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.672377 | orchestrator |
2026-03-19 00:56:46.672385 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-19 00:56:46.672393 | orchestrator | Thursday 19 March 2026 00:50:54 +0000 (0:00:00.598) 0:05:13.009 ********
2026-03-19 00:56:46.672401 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.672409 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.672417 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.672426 | orchestrator |
2026-03-19 00:56:46.672434 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-19 00:56:46.672442 | orchestrator | Thursday 19 March 2026 00:50:54 +0000 (0:00:00.263) 0:05:13.272 ********
2026-03-19 00:56:46.672450 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.672458 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.672466 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.672474 | orchestrator |
2026-03-19 00:56:46.672483 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-19 00:56:46.672491 | orchestrator | Thursday 19 March 2026 00:50:55 +0000 (0:00:00.280) 0:05:13.553 ********
2026-03-19 00:56:46.672499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 00:56:46.672507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 00:56:46.672515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 00:56:46.672524 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.672532 | orchestrator |
2026-03-19 00:56:46.672540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-19 00:56:46.672548 | orchestrator | Thursday 19 March 2026 00:50:55 +0000 (0:00:00.533) 0:05:14.087 ********
2026-03-19 00:56:46.672556 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.672564 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.672592 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.672636 | orchestrator |
2026-03-19 00:56:46.672645 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-19 00:56:46.672654 | orchestrator |
2026-03-19 00:56:46.672662 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml]
************************ 2026-03-19 00:56:46.672670 | orchestrator | Thursday 19 March 2026 00:50:56 +0000 (0:00:00.631) 0:05:14.718 ******** 2026-03-19 00:56:46.672678 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.672687 | orchestrator | 2026-03-19 00:56:46.672694 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 00:56:46.672702 | orchestrator | Thursday 19 March 2026 00:50:56 +0000 (0:00:00.399) 0:05:15.118 ******** 2026-03-19 00:56:46.672709 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.672717 | orchestrator | 2026-03-19 00:56:46.672725 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 00:56:46.672740 | orchestrator | Thursday 19 March 2026 00:50:57 +0000 (0:00:00.366) 0:05:15.485 ******** 2026-03-19 00:56:46.672748 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.672756 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.672764 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.672772 | orchestrator | 2026-03-19 00:56:46.672780 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 00:56:46.672788 | orchestrator | Thursday 19 March 2026 00:50:57 +0000 (0:00:00.765) 0:05:16.250 ******** 2026-03-19 00:56:46.672797 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.672806 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.672814 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.672823 | orchestrator | 2026-03-19 00:56:46.672831 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 00:56:46.672840 | orchestrator | Thursday 19 March 2026 00:50:57 +0000 
(0:00:00.217) 0:05:16.468 ******** 2026-03-19 00:56:46.672848 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.672856 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.672864 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.672872 | orchestrator | 2026-03-19 00:56:46.672883 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 00:56:46.672892 | orchestrator | Thursday 19 March 2026 00:50:58 +0000 (0:00:00.263) 0:05:16.731 ******** 2026-03-19 00:56:46.672900 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.672908 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.672916 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.672924 | orchestrator | 2026-03-19 00:56:46.672933 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 00:56:46.672942 | orchestrator | Thursday 19 March 2026 00:50:58 +0000 (0:00:00.299) 0:05:17.030 ******** 2026-03-19 00:56:46.672950 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.672958 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.672966 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.672974 | orchestrator | 2026-03-19 00:56:46.672982 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 00:56:46.672990 | orchestrator | Thursday 19 March 2026 00:50:59 +0000 (0:00:00.862) 0:05:17.892 ******** 2026-03-19 00:56:46.672998 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673006 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673014 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673023 | orchestrator | 2026-03-19 00:56:46.673031 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 00:56:46.673039 | orchestrator | Thursday 19 March 2026 00:50:59 +0000 (0:00:00.273) 
0:05:18.166 ******** 2026-03-19 00:56:46.673048 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673056 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673063 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673068 | orchestrator | 2026-03-19 00:56:46.673072 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 00:56:46.673077 | orchestrator | Thursday 19 March 2026 00:50:59 +0000 (0:00:00.262) 0:05:18.428 ******** 2026-03-19 00:56:46.673082 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673087 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673091 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673096 | orchestrator | 2026-03-19 00:56:46.673101 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 00:56:46.673106 | orchestrator | Thursday 19 March 2026 00:51:00 +0000 (0:00:00.734) 0:05:19.163 ******** 2026-03-19 00:56:46.673110 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673115 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673120 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673124 | orchestrator | 2026-03-19 00:56:46.673129 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 00:56:46.673134 | orchestrator | Thursday 19 March 2026 00:51:01 +0000 (0:00:01.023) 0:05:20.186 ******** 2026-03-19 00:56:46.673143 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673148 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673153 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673157 | orchestrator | 2026-03-19 00:56:46.673162 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 00:56:46.673167 | orchestrator | Thursday 19 March 2026 00:51:01 +0000 (0:00:00.257) 0:05:20.444 ******** 2026-03-19 
00:56:46.673172 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673176 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673181 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673186 | orchestrator | 2026-03-19 00:56:46.673191 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 00:56:46.673195 | orchestrator | Thursday 19 March 2026 00:51:02 +0000 (0:00:00.274) 0:05:20.718 ******** 2026-03-19 00:56:46.673200 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673205 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673210 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673215 | orchestrator | 2026-03-19 00:56:46.673219 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 00:56:46.673244 | orchestrator | Thursday 19 March 2026 00:51:02 +0000 (0:00:00.249) 0:05:20.968 ******** 2026-03-19 00:56:46.673249 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673254 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673258 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673263 | orchestrator | 2026-03-19 00:56:46.673267 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 00:56:46.673272 | orchestrator | Thursday 19 March 2026 00:51:02 +0000 (0:00:00.466) 0:05:21.435 ******** 2026-03-19 00:56:46.673276 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673281 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673285 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673290 | orchestrator | 2026-03-19 00:56:46.673294 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 00:56:46.673299 | orchestrator | Thursday 19 March 2026 00:51:03 +0000 (0:00:00.273) 0:05:21.709 ******** 2026-03-19 00:56:46.673303 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673308 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673312 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673317 | orchestrator | 2026-03-19 00:56:46.673321 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 00:56:46.673326 | orchestrator | Thursday 19 March 2026 00:51:03 +0000 (0:00:00.275) 0:05:21.984 ******** 2026-03-19 00:56:46.673330 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673335 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673340 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673344 | orchestrator | 2026-03-19 00:56:46.673349 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 00:56:46.673353 | orchestrator | Thursday 19 March 2026 00:51:03 +0000 (0:00:00.282) 0:05:22.267 ******** 2026-03-19 00:56:46.673358 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673362 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673367 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673371 | orchestrator | 2026-03-19 00:56:46.673376 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 00:56:46.673380 | orchestrator | Thursday 19 March 2026 00:51:04 +0000 (0:00:00.451) 0:05:22.718 ******** 2026-03-19 00:56:46.673385 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673389 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673394 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673398 | orchestrator | 2026-03-19 00:56:46.673406 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 00:56:46.673411 | orchestrator | Thursday 19 March 2026 00:51:04 +0000 (0:00:00.282) 0:05:23.000 ******** 2026-03-19 00:56:46.673415 | orchestrator | ok: [testbed-node-0] 
2026-03-19 00:56:46.673423 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673427 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673432 | orchestrator | 2026-03-19 00:56:46.673436 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-19 00:56:46.673441 | orchestrator | Thursday 19 March 2026 00:51:05 +0000 (0:00:00.507) 0:05:23.508 ******** 2026-03-19 00:56:46.673445 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 00:56:46.673450 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:56:46.673455 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:56:46.673459 | orchestrator | 2026-03-19 00:56:46.673464 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-19 00:56:46.673468 | orchestrator | Thursday 19 March 2026 00:51:05 +0000 (0:00:00.751) 0:05:24.259 ******** 2026-03-19 00:56:46.673473 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-03-19 00:56:46.673477 | orchestrator | 2026-03-19 00:56:46.673482 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-19 00:56:46.673486 | orchestrator | Thursday 19 March 2026 00:51:06 +0000 (0:00:00.620) 0:05:24.879 ******** 2026-03-19 00:56:46.673491 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.673495 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.673500 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.673504 | orchestrator | 2026-03-19 00:56:46.673509 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-19 00:56:46.673514 | orchestrator | Thursday 19 March 2026 00:51:07 +0000 (0:00:00.703) 0:05:25.582 ******** 2026-03-19 00:56:46.673523 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673531 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673539 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673548 | orchestrator | 2026-03-19 00:56:46.673556 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-19 00:56:46.673564 | orchestrator | Thursday 19 March 2026 00:51:07 +0000 (0:00:00.268) 0:05:25.851 ******** 2026-03-19 00:56:46.673573 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 00:56:46.673581 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 00:56:46.673589 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 00:56:46.673597 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-19 00:56:46.673617 | orchestrator | 2026-03-19 00:56:46.673625 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-19 00:56:46.673633 | orchestrator | Thursday 19 March 2026 00:51:18 +0000 (0:00:10.671) 0:05:36.523 ******** 2026-03-19 00:56:46.673639 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673646 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673654 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673661 | orchestrator | 2026-03-19 00:56:46.673668 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-19 00:56:46.673676 | orchestrator | Thursday 19 March 2026 00:51:18 +0000 (0:00:00.651) 0:05:37.174 ******** 2026-03-19 00:56:46.673683 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-19 00:56:46.673691 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-19 00:56:46.673698 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-19 00:56:46.673705 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 00:56:46.673713 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.673744 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.673752 | orchestrator | 2026-03-19 00:56:46.673760 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-19 00:56:46.673767 | orchestrator | Thursday 19 March 2026 00:51:21 +0000 (0:00:02.586) 0:05:39.761 ******** 2026-03-19 00:56:46.673775 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-19 00:56:46.673788 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-19 00:56:46.673796 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-19 00:56:46.673803 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 00:56:46.673811 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-19 00:56:46.673819 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-19 00:56:46.673826 | orchestrator | 2026-03-19 00:56:46.673834 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-19 00:56:46.673842 | orchestrator | Thursday 19 March 2026 00:51:22 +0000 (0:00:01.162) 0:05:40.924 ******** 2026-03-19 00:56:46.673849 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.673857 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.673865 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.673872 | orchestrator | 2026-03-19 00:56:46.673880 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-19 00:56:46.673888 | orchestrator | Thursday 19 March 2026 00:51:23 +0000 (0:00:00.746) 0:05:41.671 ******** 2026-03-19 00:56:46.673896 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673903 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673911 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673919 | 
orchestrator | 2026-03-19 00:56:46.673926 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-19 00:56:46.673934 | orchestrator | Thursday 19 March 2026 00:51:23 +0000 (0:00:00.439) 0:05:42.110 ******** 2026-03-19 00:56:46.673942 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.673950 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.673957 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.673965 | orchestrator | 2026-03-19 00:56:46.673973 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-19 00:56:46.673980 | orchestrator | Thursday 19 March 2026 00:51:23 +0000 (0:00:00.253) 0:05:42.363 ******** 2026-03-19 00:56:46.673992 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.674000 | orchestrator | 2026-03-19 00:56:46.674008 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-19 00:56:46.674043 | orchestrator | Thursday 19 March 2026 00:51:24 +0000 (0:00:00.447) 0:05:42.811 ******** 2026-03-19 00:56:46.674051 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.674059 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.674067 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.674074 | orchestrator | 2026-03-19 00:56:46.674082 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-19 00:56:46.674090 | orchestrator | Thursday 19 March 2026 00:51:24 +0000 (0:00:00.320) 0:05:43.131 ******** 2026-03-19 00:56:46.674097 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.674105 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.674113 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:46.674120 | orchestrator | 2026-03-19 00:56:46.674128 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-19 00:56:46.674136 | orchestrator | Thursday 19 March 2026 00:51:25 +0000 (0:00:00.448) 0:05:43.579 ******** 2026-03-19 00:56:46.674144 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.674152 | orchestrator | 2026-03-19 00:56:46.674159 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-19 00:56:46.674167 | orchestrator | Thursday 19 March 2026 00:51:25 +0000 (0:00:00.445) 0:05:44.025 ******** 2026-03-19 00:56:46.674175 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.674183 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.674190 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.674198 | orchestrator | 2026-03-19 00:56:46.674206 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-19 00:56:46.674213 | orchestrator | Thursday 19 March 2026 00:51:26 +0000 (0:00:01.256) 0:05:45.281 ******** 2026-03-19 00:56:46.674225 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.674233 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.674241 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.674249 | orchestrator | 2026-03-19 00:56:46.674258 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-19 00:56:46.674266 | orchestrator | Thursday 19 March 2026 00:51:28 +0000 (0:00:01.270) 0:05:46.552 ******** 2026-03-19 00:56:46.674274 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.674282 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.674290 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.674298 | orchestrator | 2026-03-19 00:56:46.674306 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-19 00:56:46.674314 | orchestrator | Thursday 19 March 2026 00:51:29 +0000 (0:00:01.731) 0:05:48.283 ******** 2026-03-19 00:56:46.674322 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.674330 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.674338 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.674345 | orchestrator | 2026-03-19 00:56:46.674353 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-19 00:56:46.674360 | orchestrator | Thursday 19 March 2026 00:51:31 +0000 (0:00:01.926) 0:05:50.210 ******** 2026-03-19 00:56:46.674368 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:46.674376 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.674384 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-19 00:56:46.674392 | orchestrator | 2026-03-19 00:56:46.674399 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-19 00:56:46.674407 | orchestrator | Thursday 19 March 2026 00:51:32 +0000 (0:00:00.377) 0:05:50.588 ******** 2026-03-19 00:56:46.674435 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-19 00:56:46.674441 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-19 00:56:46.674446 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-19 00:56:46.674450 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-19 00:56:46.674455 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-19 00:56:46.674459 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2026-03-19 00:56:46.674464 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-19 00:56:46.674469 | orchestrator | 2026-03-19 00:56:46.674473 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-19 00:56:46.674482 | orchestrator | Thursday 19 March 2026 00:52:08 +0000 (0:00:36.689) 0:06:27.278 ******** 2026-03-19 00:56:46.674490 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-19 00:56:46.674498 | orchestrator | 2026-03-19 00:56:46.674506 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-19 00:56:46.674514 | orchestrator | Thursday 19 March 2026 00:52:10 +0000 (0:00:01.292) 0:06:28.571 ******** 2026-03-19 00:56:46.674523 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.674531 | orchestrator | 2026-03-19 00:56:46.674539 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-19 00:56:46.674547 | orchestrator | Thursday 19 March 2026 00:52:10 +0000 (0:00:00.285) 0:06:28.856 ******** 2026-03-19 00:56:46.674555 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.674564 | orchestrator | 2026-03-19 00:56:46.674573 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-19 00:56:46.674578 | orchestrator | Thursday 19 March 2026 00:52:10 +0000 (0:00:00.153) 0:06:29.010 ******** 2026-03-19 00:56:46.674589 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-19 00:56:46.674597 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-19 00:56:46.674616 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-19 
00:56:46.674622 | orchestrator | 2026-03-19 00:56:46.674627 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-19 00:56:46.674631 | orchestrator | Thursday 19 March 2026 00:52:16 +0000 (0:00:06.414) 0:06:35.425 ******** 2026-03-19 00:56:46.674636 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-19 00:56:46.674640 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-19 00:56:46.674645 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-19 00:56:46.674649 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-19 00:56:46.674655 | orchestrator | 2026-03-19 00:56:46.674662 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 00:56:46.674670 | orchestrator | Thursday 19 March 2026 00:52:21 +0000 (0:00:04.838) 0:06:40.263 ******** 2026-03-19 00:56:46.674677 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.674685 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.674692 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.674700 | orchestrator | 2026-03-19 00:56:46.674707 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-19 00:56:46.674715 | orchestrator | Thursday 19 March 2026 00:52:22 +0000 (0:00:00.960) 0:06:41.223 ******** 2026-03-19 00:56:46.674722 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:46.674730 | orchestrator | 2026-03-19 00:56:46.674737 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-19 00:56:46.674745 | orchestrator | Thursday 19 March 2026 00:52:23 +0000 (0:00:00.518) 0:06:41.742 ******** 2026-03-19 00:56:46.674752 | orchestrator | ok: [testbed-node-0] 
2026-03-19 00:56:46.674760 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.674768 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.674776 | orchestrator | 2026-03-19 00:56:46.674783 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-19 00:56:46.674791 | orchestrator | Thursday 19 March 2026 00:52:23 +0000 (0:00:00.326) 0:06:42.069 ******** 2026-03-19 00:56:46.674799 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:46.674806 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:46.674814 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:46.674822 | orchestrator | 2026-03-19 00:56:46.674829 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-19 00:56:46.674837 | orchestrator | Thursday 19 March 2026 00:52:25 +0000 (0:00:01.635) 0:06:43.704 ******** 2026-03-19 00:56:46.674845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 00:56:46.674852 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 00:56:46.674860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 00:56:46.674868 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:46.674876 | orchestrator | 2026-03-19 00:56:46.674883 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-19 00:56:46.674891 | orchestrator | Thursday 19 March 2026 00:52:25 +0000 (0:00:00.578) 0:06:44.282 ******** 2026-03-19 00:56:46.674899 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:46.674907 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:46.674914 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:46.674922 | orchestrator | 2026-03-19 00:56:46.674930 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-19 00:56:46.674938 | orchestrator | 2026-03-19 00:56:46.674945 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 00:56:46.674979 | orchestrator | Thursday 19 March 2026 00:52:26 +0000 (0:00:00.540) 0:06:44.822 ******** 2026-03-19 00:56:46.674994 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.675002 | orchestrator | 2026-03-19 00:56:46.675010 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 00:56:46.675017 | orchestrator | Thursday 19 March 2026 00:52:27 +0000 (0:00:00.693) 0:06:45.515 ******** 2026-03-19 00:56:46.675025 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.675033 | orchestrator | 2026-03-19 00:56:46.675040 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 00:56:46.675048 | orchestrator | Thursday 19 March 2026 00:52:27 +0000 (0:00:00.512) 0:06:46.028 ******** 2026-03-19 00:56:46.675056 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675063 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675071 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675079 | orchestrator | 2026-03-19 00:56:46.675086 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 00:56:46.675094 | orchestrator | Thursday 19 March 2026 00:52:27 +0000 (0:00:00.274) 0:06:46.303 ******** 2026-03-19 00:56:46.675102 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675110 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675118 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675126 | orchestrator | 2026-03-19 00:56:46.675134 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 
00:56:46.675142 | orchestrator | Thursday 19 March 2026 00:52:28 +0000 (0:00:01.003) 0:06:47.306 ******** 2026-03-19 00:56:46.675150 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675158 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675166 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675174 | orchestrator | 2026-03-19 00:56:46.675181 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 00:56:46.675188 | orchestrator | Thursday 19 March 2026 00:52:29 +0000 (0:00:00.709) 0:06:48.015 ******** 2026-03-19 00:56:46.675196 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675203 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675215 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675222 | orchestrator | 2026-03-19 00:56:46.675230 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 00:56:46.675237 | orchestrator | Thursday 19 March 2026 00:52:30 +0000 (0:00:00.761) 0:06:48.776 ******** 2026-03-19 00:56:46.675245 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675252 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675260 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675267 | orchestrator | 2026-03-19 00:56:46.675275 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 00:56:46.675282 | orchestrator | Thursday 19 March 2026 00:52:30 +0000 (0:00:00.356) 0:06:49.132 ******** 2026-03-19 00:56:46.675290 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675298 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675306 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675313 | orchestrator | 2026-03-19 00:56:46.675321 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 00:56:46.675329 | orchestrator | 
Thursday 19 March 2026 00:52:31 +0000 (0:00:00.548) 0:06:49.680 ******** 2026-03-19 00:56:46.675336 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675344 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675351 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675359 | orchestrator | 2026-03-19 00:56:46.675367 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 00:56:46.675374 | orchestrator | Thursday 19 March 2026 00:52:31 +0000 (0:00:00.301) 0:06:49.982 ******** 2026-03-19 00:56:46.675382 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675389 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675402 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675409 | orchestrator | 2026-03-19 00:56:46.675417 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 00:56:46.675424 | orchestrator | Thursday 19 March 2026 00:52:32 +0000 (0:00:00.717) 0:06:50.700 ******** 2026-03-19 00:56:46.675431 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675439 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675446 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675454 | orchestrator | 2026-03-19 00:56:46.675462 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 00:56:46.675469 | orchestrator | Thursday 19 March 2026 00:52:32 +0000 (0:00:00.696) 0:06:51.397 ******** 2026-03-19 00:56:46.675476 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675484 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675491 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675499 | orchestrator | 2026-03-19 00:56:46.675506 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 00:56:46.675513 | orchestrator | Thursday 19 March 2026 00:52:33 +0000 
(0:00:00.525) 0:06:51.923 ******** 2026-03-19 00:56:46.675521 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675529 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675535 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675540 | orchestrator | 2026-03-19 00:56:46.675544 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 00:56:46.675549 | orchestrator | Thursday 19 March 2026 00:52:33 +0000 (0:00:00.301) 0:06:52.224 ******** 2026-03-19 00:56:46.675554 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675558 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675563 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675567 | orchestrator | 2026-03-19 00:56:46.675572 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 00:56:46.675576 | orchestrator | Thursday 19 March 2026 00:52:34 +0000 (0:00:00.318) 0:06:52.543 ******** 2026-03-19 00:56:46.675580 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675585 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675589 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675594 | orchestrator | 2026-03-19 00:56:46.675598 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 00:56:46.675637 | orchestrator | Thursday 19 March 2026 00:52:34 +0000 (0:00:00.297) 0:06:52.841 ******** 2026-03-19 00:56:46.675642 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675647 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675651 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675656 | orchestrator | 2026-03-19 00:56:46.675660 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 00:56:46.675665 | orchestrator | Thursday 19 March 2026 00:52:34 +0000 (0:00:00.618) 0:06:53.459 ******** 2026-03-19 
00:56:46.675669 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675674 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675679 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675683 | orchestrator | 2026-03-19 00:56:46.675688 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 00:56:46.675692 | orchestrator | Thursday 19 March 2026 00:52:35 +0000 (0:00:00.331) 0:06:53.790 ******** 2026-03-19 00:56:46.675697 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675701 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675706 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675710 | orchestrator | 2026-03-19 00:56:46.675715 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 00:56:46.675719 | orchestrator | Thursday 19 March 2026 00:52:35 +0000 (0:00:00.309) 0:06:54.100 ******** 2026-03-19 00:56:46.675724 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675728 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675733 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675737 | orchestrator | 2026-03-19 00:56:46.675745 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 00:56:46.675750 | orchestrator | Thursday 19 March 2026 00:52:35 +0000 (0:00:00.288) 0:06:54.388 ******** 2026-03-19 00:56:46.675754 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675759 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675763 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675768 | orchestrator | 2026-03-19 00:56:46.675772 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 00:56:46.675777 | orchestrator | Thursday 19 March 2026 00:52:36 +0000 (0:00:00.569) 0:06:54.958 ******** 2026-03-19 00:56:46.675781 | 
orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675786 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675790 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675795 | orchestrator | 2026-03-19 00:56:46.675802 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-19 00:56:46.675807 | orchestrator | Thursday 19 March 2026 00:52:37 +0000 (0:00:00.523) 0:06:55.481 ******** 2026-03-19 00:56:46.675811 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675816 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675820 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675825 | orchestrator | 2026-03-19 00:56:46.675829 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-19 00:56:46.675834 | orchestrator | Thursday 19 March 2026 00:52:37 +0000 (0:00:00.303) 0:06:55.785 ******** 2026-03-19 00:56:46.675839 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:56:46.675846 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:56:46.675857 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:56:46.675864 | orchestrator | 2026-03-19 00:56:46.675870 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-19 00:56:46.675876 | orchestrator | Thursday 19 March 2026 00:52:38 +0000 (0:00:00.912) 0:06:56.697 ******** 2026-03-19 00:56:46.675882 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.675888 | orchestrator | 2026-03-19 00:56:46.675894 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-19 00:56:46.675900 | orchestrator | Thursday 19 March 2026 00:52:39 +0000 
(0:00:00.814) 0:06:57.511 ******** 2026-03-19 00:56:46.675905 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675911 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675918 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675923 | orchestrator | 2026-03-19 00:56:46.675929 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-19 00:56:46.675935 | orchestrator | Thursday 19 March 2026 00:52:39 +0000 (0:00:00.292) 0:06:57.804 ******** 2026-03-19 00:56:46.675941 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.675947 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.675953 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.675960 | orchestrator | 2026-03-19 00:56:46.675966 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-19 00:56:46.675972 | orchestrator | Thursday 19 March 2026 00:52:39 +0000 (0:00:00.324) 0:06:58.129 ******** 2026-03-19 00:56:46.675978 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.675984 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.675991 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.675998 | orchestrator | 2026-03-19 00:56:46.676004 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-19 00:56:46.676010 | orchestrator | Thursday 19 March 2026 00:52:40 +0000 (0:00:00.981) 0:06:59.110 ******** 2026-03-19 00:56:46.676017 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.676023 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.676029 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.676036 | orchestrator | 2026-03-19 00:56:46.676042 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-19 00:56:46.676055 | orchestrator | Thursday 19 March 2026 00:52:40 +0000 (0:00:00.330) 0:06:59.440 ******** 
2026-03-19 00:56:46.676062 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 00:56:46.676069 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 00:56:46.676077 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 00:56:46.676088 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 00:56:46.676092 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 00:56:46.676096 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 00:56:46.676100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 00:56:46.676104 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 00:56:46.676108 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 00:56:46.676112 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 00:56:46.676116 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 00:56:46.676120 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 00:56:46.676124 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 00:56:46.676128 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 00:56:46.676132 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 00:56:46.676137 | orchestrator | 2026-03-19 00:56:46.676141 | orchestrator 
| TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-19 00:56:46.676145 | orchestrator | Thursday 19 March 2026 00:52:43 +0000 (0:00:02.369) 0:07:01.810 ******** 2026-03-19 00:56:46.676149 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676153 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676157 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.676161 | orchestrator | 2026-03-19 00:56:46.676165 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-19 00:56:46.676169 | orchestrator | Thursday 19 March 2026 00:52:43 +0000 (0:00:00.287) 0:07:02.098 ******** 2026-03-19 00:56:46.676176 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.676180 | orchestrator | 2026-03-19 00:56:46.676184 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-19 00:56:46.676189 | orchestrator | Thursday 19 March 2026 00:52:44 +0000 (0:00:00.753) 0:07:02.852 ******** 2026-03-19 00:56:46.676193 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 00:56:46.676197 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 00:56:46.676201 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 00:56:46.676205 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-19 00:56:46.676209 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-19 00:56:46.676213 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-19 00:56:46.676217 | orchestrator | 2026-03-19 00:56:46.676221 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-19 00:56:46.676225 | orchestrator | Thursday 19 March 2026 00:52:45 +0000 (0:00:01.076) 
0:07:03.929 ******** 2026-03-19 00:56:46.676229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.676233 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 00:56:46.676240 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 00:56:46.676244 | orchestrator | 2026-03-19 00:56:46.676248 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-19 00:56:46.676253 | orchestrator | Thursday 19 March 2026 00:52:47 +0000 (0:00:02.444) 0:07:06.373 ******** 2026-03-19 00:56:46.676257 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 00:56:46.676261 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 00:56:46.676265 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.676269 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 00:56:46.676273 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 00:56:46.676277 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.676281 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 00:56:46.676285 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 00:56:46.676289 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.676293 | orchestrator | 2026-03-19 00:56:46.676297 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-19 00:56:46.676301 | orchestrator | Thursday 19 March 2026 00:52:49 +0000 (0:00:01.724) 0:07:08.098 ******** 2026-03-19 00:56:46.676305 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 00:56:46.676309 | orchestrator | 2026-03-19 00:56:46.676313 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-19 00:56:46.676318 | orchestrator | Thursday 19 March 2026 00:52:51 +0000 (0:00:02.285) 
0:07:10.383 ******** 2026-03-19 00:56:46.676322 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.676326 | orchestrator | 2026-03-19 00:56:46.676330 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-19 00:56:46.676334 | orchestrator | Thursday 19 March 2026 00:52:52 +0000 (0:00:00.503) 0:07:10.887 ******** 2026-03-19 00:56:46.676338 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c9339aa0-dcb3-5462-b16c-1d446efe678c', 'data_vg': 'ceph-c9339aa0-dcb3-5462-b16c-1d446efe678c'}) 2026-03-19 00:56:46.676343 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7952abd-f19d-5f54-b846-7c46d615b8fb', 'data_vg': 'ceph-f7952abd-f19d-5f54-b846-7c46d615b8fb'}) 2026-03-19 00:56:46.676349 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0', 'data_vg': 'ceph-24d614e2-ec6e-5ed2-9057-307e4a3cb0c0'}) 2026-03-19 00:56:46.676354 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0813f2fe-0b5e-5f32-866c-c0f68041cbc1', 'data_vg': 'ceph-0813f2fe-0b5e-5f32-866c-c0f68041cbc1'}) 2026-03-19 00:56:46.676358 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-056512d9-3a02-5302-afc2-fa0158449af3', 'data_vg': 'ceph-056512d9-3a02-5302-afc2-fa0158449af3'}) 2026-03-19 00:56:46.676362 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d672a78a-4132-5655-a0fe-bae0f8eb714c', 'data_vg': 'ceph-d672a78a-4132-5655-a0fe-bae0f8eb714c'}) 2026-03-19 00:56:46.676366 | orchestrator | 2026-03-19 00:56:46.676370 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-19 00:56:46.676374 | orchestrator | Thursday 19 March 2026 00:53:36 +0000 (0:00:43.696) 0:07:54.584 ******** 2026-03-19 00:56:46.676378 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
00:56:46.676382 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676387 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.676391 | orchestrator | 2026-03-19 00:56:46.676395 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-19 00:56:46.676399 | orchestrator | Thursday 19 March 2026 00:53:36 +0000 (0:00:00.429) 0:07:55.013 ******** 2026-03-19 00:56:46.676403 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.676407 | orchestrator | 2026-03-19 00:56:46.676414 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-19 00:56:46.676418 | orchestrator | Thursday 19 March 2026 00:53:36 +0000 (0:00:00.447) 0:07:55.461 ******** 2026-03-19 00:56:46.676422 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.676427 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.676431 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.676435 | orchestrator | 2026-03-19 00:56:46.676439 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-19 00:56:46.676445 | orchestrator | Thursday 19 March 2026 00:53:37 +0000 (0:00:00.615) 0:07:56.077 ******** 2026-03-19 00:56:46.676449 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.676453 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.676457 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.676462 | orchestrator | 2026-03-19 00:56:46.676466 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-19 00:56:46.676470 | orchestrator | Thursday 19 March 2026 00:53:40 +0000 (0:00:02.741) 0:07:58.818 ******** 2026-03-19 00:56:46.676474 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.676478 | 
orchestrator | 2026-03-19 00:56:46.676482 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-19 00:56:46.676486 | orchestrator | Thursday 19 March 2026 00:53:40 +0000 (0:00:00.537) 0:07:59.355 ******** 2026-03-19 00:56:46.676490 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.676495 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.676499 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.676503 | orchestrator | 2026-03-19 00:56:46.676507 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-19 00:56:46.676511 | orchestrator | Thursday 19 March 2026 00:53:42 +0000 (0:00:01.166) 0:08:00.522 ******** 2026-03-19 00:56:46.676515 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.676519 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.676523 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.676527 | orchestrator | 2026-03-19 00:56:46.676531 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-19 00:56:46.676535 | orchestrator | Thursday 19 March 2026 00:53:43 +0000 (0:00:01.583) 0:08:02.106 ******** 2026-03-19 00:56:46.676539 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.676543 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.676548 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.676552 | orchestrator | 2026-03-19 00:56:46.676556 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-19 00:56:46.676560 | orchestrator | Thursday 19 March 2026 00:53:45 +0000 (0:00:01.845) 0:08:03.952 ******** 2026-03-19 00:56:46.676564 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676568 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676572 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.676576 | orchestrator 
| 2026-03-19 00:56:46.676580 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-19 00:56:46.676584 | orchestrator | Thursday 19 March 2026 00:53:45 +0000 (0:00:00.328) 0:08:04.281 ******** 2026-03-19 00:56:46.676588 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676593 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676599 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.676616 | orchestrator | 2026-03-19 00:56:46.676623 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-19 00:56:46.676630 | orchestrator | Thursday 19 March 2026 00:53:46 +0000 (0:00:00.312) 0:08:04.596 ******** 2026-03-19 00:56:46.676638 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-19 00:56:46.676644 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-19 00:56:46.676650 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-03-19 00:56:46.676656 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-19 00:56:46.676662 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 00:56:46.676674 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-19 00:56:46.676681 | orchestrator | 2026-03-19 00:56:46.676688 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-19 00:56:46.676695 | orchestrator | Thursday 19 March 2026 00:53:47 +0000 (0:00:01.391) 0:08:05.987 ******** 2026-03-19 00:56:46.676702 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-19 00:56:46.676709 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-19 00:56:46.676721 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-19 00:56:46.676728 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-19 00:56:46.676734 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-19 00:56:46.676742 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-19 00:56:46.676749 | 
orchestrator | 2026-03-19 00:56:46.676756 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-19 00:56:46.676763 | orchestrator | Thursday 19 March 2026 00:53:49 +0000 (0:00:02.317) 0:08:08.305 ******** 2026-03-19 00:56:46.676769 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-19 00:56:46.676777 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-19 00:56:46.676784 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-19 00:56:46.676791 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-19 00:56:46.676798 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-19 00:56:46.676805 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-19 00:56:46.676812 | orchestrator | 2026-03-19 00:56:46.676822 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-19 00:56:46.676829 | orchestrator | Thursday 19 March 2026 00:53:53 +0000 (0:00:03.781) 0:08:12.086 ******** 2026-03-19 00:56:46.676835 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676842 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676848 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 00:56:46.676855 | orchestrator | 2026-03-19 00:56:46.676862 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-19 00:56:46.676869 | orchestrator | Thursday 19 March 2026 00:53:55 +0000 (0:00:02.096) 0:08:14.183 ******** 2026-03-19 00:56:46.676875 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676882 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676889 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-19 00:56:46.676895 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 00:56:46.676901 | orchestrator | 2026-03-19 00:56:46.676907 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-19 00:56:46.676912 | orchestrator | Thursday 19 March 2026 00:54:08 +0000 (0:00:12.811) 0:08:26.995 ******** 2026-03-19 00:56:46.676918 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676929 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676936 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.676942 | orchestrator | 2026-03-19 00:56:46.676949 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 00:56:46.676956 | orchestrator | Thursday 19 March 2026 00:54:09 +0000 (0:00:00.932) 0:08:27.927 ******** 2026-03-19 00:56:46.676963 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.676970 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.676976 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.676982 | orchestrator | 2026-03-19 00:56:46.676990 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-19 00:56:46.677001 | orchestrator | Thursday 19 March 2026 00:54:10 +0000 (0:00:00.557) 0:08:28.485 ******** 2026-03-19 00:56:46.677007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.677014 | orchestrator | 2026-03-19 00:56:46.677021 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-19 00:56:46.677028 | orchestrator | Thursday 19 March 2026 00:54:10 +0000 (0:00:00.499) 0:08:28.985 ******** 2026-03-19 00:56:46.677041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.677047 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4) 
2026-03-19 00:56:46.677053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-03-19 00:56:46.677060 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677067 | orchestrator |
2026-03-19 00:56:46.677073 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-19 00:56:46.677080 | orchestrator | Thursday 19 March 2026 00:54:10 +0000 (0:00:00.318) 0:08:29.303 ********
2026-03-19 00:56:46.677086 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677092 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677099 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677105 | orchestrator |
2026-03-19 00:56:46.677112 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-19 00:56:46.677118 | orchestrator | Thursday 19 March 2026 00:54:11 +0000 (0:00:00.291) 0:08:29.594 ********
2026-03-19 00:56:46.677124 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677130 | orchestrator |
2026-03-19 00:56:46.677137 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-19 00:56:46.677143 | orchestrator | Thursday 19 March 2026 00:54:11 +0000 (0:00:00.170) 0:08:29.765 ********
2026-03-19 00:56:46.677149 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677156 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677163 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677170 | orchestrator |
2026-03-19 00:56:46.677178 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-19 00:56:46.677184 | orchestrator | Thursday 19 March 2026 00:54:11 +0000 (0:00:00.462) 0:08:30.227 ********
2026-03-19 00:56:46.677191 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677198 | orchestrator |
2026-03-19 00:56:46.677205 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-19 00:56:46.677211 | orchestrator | Thursday 19 March 2026 00:54:11 +0000 (0:00:00.207) 0:08:30.435 ********
2026-03-19 00:56:46.677215 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677219 | orchestrator |
2026-03-19 00:56:46.677223 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-19 00:56:46.677227 | orchestrator | Thursday 19 March 2026 00:54:12 +0000 (0:00:00.197) 0:08:30.632 ********
2026-03-19 00:56:46.677234 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677241 | orchestrator |
2026-03-19 00:56:46.677248 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-19 00:56:46.677255 | orchestrator | Thursday 19 March 2026 00:54:12 +0000 (0:00:00.104) 0:08:30.737 ********
2026-03-19 00:56:46.677268 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677276 | orchestrator |
2026-03-19 00:56:46.677283 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-19 00:56:46.677291 | orchestrator | Thursday 19 March 2026 00:54:12 +0000 (0:00:00.199) 0:08:30.936 ********
2026-03-19 00:56:46.677298 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677305 | orchestrator |
2026-03-19 00:56:46.677310 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-19 00:56:46.677314 | orchestrator | Thursday 19 March 2026 00:54:12 +0000 (0:00:00.184) 0:08:31.121 ********
2026-03-19 00:56:46.677318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-03-19 00:56:46.677322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-03-19 00:56:46.677327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-03-19 00:56:46.677334 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677341 | orchestrator |
2026-03-19 00:56:46.677348 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-19 00:56:46.677356 | orchestrator | Thursday 19 March 2026 00:54:13 +0000 (0:00:00.370) 0:08:31.491 ********
2026-03-19 00:56:46.677363 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677380 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677387 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677394 | orchestrator |
2026-03-19 00:56:46.677400 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-19 00:56:46.677404 | orchestrator | Thursday 19 March 2026 00:54:13 +0000 (0:00:00.279) 0:08:31.771 ********
2026-03-19 00:56:46.677408 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677412 | orchestrator |
2026-03-19 00:56:46.677416 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-19 00:56:46.677421 | orchestrator | Thursday 19 March 2026 00:54:13 +0000 (0:00:00.533) 0:08:32.304 ********
2026-03-19 00:56:46.677428 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677435 | orchestrator |
2026-03-19 00:56:46.677441 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-19 00:56:46.677448 | orchestrator |
2026-03-19 00:56:46.677455 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 00:56:46.677466 | orchestrator | Thursday 19 March 2026 00:54:14 +0000 (0:00:00.557) 0:08:32.861 ********
2026-03-19 00:56:46.677474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2
2026-03-19 00:56:46.677481 | orchestrator |
2026-03-19 00:56:46.677488 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 00:56:46.677495 | orchestrator | Thursday 19 March 2026 00:54:15 +0000 (0:00:01.139) 0:08:34.001 ********
2026-03-19 00:56:46.677502 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.677510 | orchestrator |
2026-03-19 00:56:46.677516 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 00:56:46.677523 | orchestrator | Thursday 19 March 2026 00:54:16 +0000 (0:00:01.051) 0:08:35.052 ********
2026-03-19 00:56:46.677529 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677536 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677544 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677551 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.677558 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.677564 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.677571 | orchestrator |
2026-03-19 00:56:46.677578 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 00:56:46.677585 | orchestrator | Thursday 19 March 2026 00:54:17 +0000 (0:00:01.228) 0:08:36.280 ********
2026-03-19 00:56:46.677591 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.677599 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.677640 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.677647 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.677655 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.677661 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.677668 | orchestrator |
2026-03-19 00:56:46.677675 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 00:56:46.677682 | orchestrator | Thursday 19 March 2026 00:54:18 +0000 (0:00:00.926) 0:08:37.207 ********
2026-03-19 00:56:46.677688 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.677695 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.677703 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.677710 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.677717 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.677724 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.677730 | orchestrator |
2026-03-19 00:56:46.677737 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 00:56:46.677744 | orchestrator | Thursday 19 March 2026 00:54:20 +0000 (0:00:01.406) 0:08:38.613 ********
2026-03-19 00:56:46.677751 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.677762 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.677769 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.677776 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.677783 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.677790 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.677797 | orchestrator |
2026-03-19 00:56:46.677804 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 00:56:46.677811 | orchestrator | Thursday 19 March 2026 00:54:21 +0000 (0:00:00.951) 0:08:39.565 ********
2026-03-19 00:56:46.677818 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677825 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677832 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677839 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.677845 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.677852 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.677859 | orchestrator |
2026-03-19 00:56:46.677865 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 00:56:46.677877 | orchestrator | Thursday 19 March 2026 00:54:22 +0000 (0:00:01.290) 0:08:40.856 ********
2026-03-19 00:56:46.677884 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677891 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677898 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677905 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.677911 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.677918 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.677925 | orchestrator |
2026-03-19 00:56:46.677931 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 00:56:46.677938 | orchestrator | Thursday 19 March 2026 00:54:23 +0000 (0:00:00.674) 0:08:41.530 ********
2026-03-19 00:56:46.677945 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.677951 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.677958 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.677966 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.677972 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.677979 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.677986 | orchestrator |
2026-03-19 00:56:46.677993 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 00:56:46.678000 | orchestrator | Thursday 19 March 2026 00:54:23 +0000 (0:00:00.608) 0:08:42.139 ********
2026-03-19 00:56:46.678006 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678040 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678049 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678057 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678064 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.678072 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.678079 | orchestrator |
2026-03-19 00:56:46.678087 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 00:56:46.678094 | orchestrator | Thursday 19 March 2026 00:54:25 +0000 (0:00:01.791) 0:08:43.930 ********
2026-03-19 00:56:46.678101 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678109 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678116 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678123 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678131 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.678138 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.678146 | orchestrator |
2026-03-19 00:56:46.678153 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 00:56:46.678160 | orchestrator | Thursday 19 March 2026 00:54:26 +0000 (0:00:01.160) 0:08:45.090 ********
2026-03-19 00:56:46.678167 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.678178 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.678186 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.678193 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.678200 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.678207 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.678219 | orchestrator |
2026-03-19 00:56:46.678227 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 00:56:46.678234 | orchestrator | Thursday 19 March 2026 00:54:27 +0000 (0:00:01.169) 0:08:46.260 ********
2026-03-19 00:56:46.678241 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.678247 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.678254 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.678261 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678267 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.678274 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.678281 | orchestrator |
2026-03-19 00:56:46.678287 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 00:56:46.678293 | orchestrator | Thursday 19 March 2026 00:54:28 +0000 (0:00:00.785) 0:08:47.045 ********
2026-03-19 00:56:46.678300 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678307 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678314 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678321 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.678328 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.678335 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.678342 | orchestrator |
2026-03-19 00:56:46.678348 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 00:56:46.678355 | orchestrator | Thursday 19 March 2026 00:54:29 +0000 (0:00:00.880) 0:08:47.925 ********
2026-03-19 00:56:46.678362 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678368 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678375 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678381 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.678388 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.678395 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.678401 | orchestrator |
2026-03-19 00:56:46.678408 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 00:56:46.678415 | orchestrator | Thursday 19 March 2026 00:54:30 +0000 (0:00:00.817) 0:08:48.742 ********
2026-03-19 00:56:46.678421 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678428 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678435 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678442 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.678448 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.678455 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.678462 | orchestrator |
2026-03-19 00:56:46.678468 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 00:56:46.678475 | orchestrator | Thursday 19 March 2026 00:54:31 +0000 (0:00:01.053) 0:08:49.796 ********
2026-03-19 00:56:46.678482 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.678488 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.678495 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.678503 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.678510 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.678516 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.678523 | orchestrator |
2026-03-19 00:56:46.678530 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 00:56:46.678537 | orchestrator | Thursday 19 March 2026 00:54:32 +0000 (0:00:00.745) 0:08:50.541 ********
2026-03-19 00:56:46.678544 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.678550 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.678557 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.678564 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:46.678571 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:46.678577 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:46.678584 | orchestrator |
2026-03-19 00:56:46.678591 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 00:56:46.678616 | orchestrator | Thursday 19 March 2026 00:54:32 +0000 (0:00:00.687) 0:08:51.229 ********
2026-03-19 00:56:46.678627 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.678634 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.678640 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.678647 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678654 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.678660 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.678667 | orchestrator |
2026-03-19 00:56:46.678673 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 00:56:46.678679 | orchestrator | Thursday 19 March 2026 00:54:33 +0000 (0:00:00.537) 0:08:51.766 ********
2026-03-19 00:56:46.678685 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678692 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678698 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678705 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678711 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.678717 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.678723 | orchestrator |
2026-03-19 00:56:46.678730 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 00:56:46.678736 | orchestrator | Thursday 19 March 2026 00:54:34 +0000 (0:00:00.741) 0:08:52.508 ********
2026-03-19 00:56:46.678742 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.678748 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.678754 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.678760 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.678766 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678772 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.678779 | orchestrator |
2026-03-19 00:56:46.678785 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-19 00:56:46.678791 | orchestrator | Thursday 19 March 2026 00:54:35 +0000 (0:00:01.229) 0:08:53.738 ********
2026-03-19 00:56:46.678797 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 00:56:46.678803 | orchestrator |
2026-03-19 00:56:46.678810 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-19 00:56:46.678816 | orchestrator | Thursday 19 March 2026 00:54:39 +0000 (0:00:03.960) 0:08:57.699 ********
2026-03-19 00:56:46.678822 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 00:56:46.678829 | orchestrator |
2026-03-19 00:56:46.678835 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-19 00:56:46.678844 | orchestrator | Thursday 19 March 2026 00:54:41 +0000 (0:00:01.863) 0:08:59.562 ********
2026-03-19 00:56:46.678851 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.678857 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.678863 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.678869 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.678875 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.678881 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.678887 | orchestrator |
2026-03-19 00:56:46.678894 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-19 00:56:46.678901 | orchestrator | Thursday 19 March 2026 00:54:42 +0000 (0:00:01.326) 0:09:00.889 ********
2026-03-19 00:56:46.678907 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.678914 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.678920 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.678926 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.678932 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.678938 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.678945 | orchestrator |
2026-03-19 00:56:46.678951 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-19 00:56:46.678957 | orchestrator | Thursday 19 March 2026 00:54:43 +0000 (0:00:01.174) 0:09:02.063 ********
2026-03-19 00:56:46.678964 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.678971 | orchestrator |
2026-03-19 00:56:46.678982 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-19 00:56:46.678989 | orchestrator | Thursday 19 March 2026 00:54:44 +0000 (0:00:01.258) 0:09:03.322 ********
2026-03-19 00:56:46.678995 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.679001 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.679007 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.679014 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.679020 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.679026 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.679032 | orchestrator |
2026-03-19 00:56:46.679038 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-19 00:56:46.679045 | orchestrator | Thursday 19 March 2026 00:54:46 +0000 (0:00:01.397) 0:09:04.720 ********
2026-03-19 00:56:46.679051 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.679057 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.679064 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.679070 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.679076 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.679083 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.679089 | orchestrator |
2026-03-19 00:56:46.679095 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-19 00:56:46.679101 | orchestrator | Thursday 19 March 2026 00:54:49 +0000 (0:00:03.375) 0:09:08.095 ********
2026-03-19 00:56:46.679108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:46.679114 | orchestrator |
2026-03-19 00:56:46.679120 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-19 00:56:46.679127 | orchestrator | Thursday 19 March 2026 00:54:50 +0000 (0:00:01.288) 0:09:09.384 ********
2026-03-19 00:56:46.679133 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679139 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679146 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679152 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.679159 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.679165 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.679171 | orchestrator |
2026-03-19 00:56:46.679177 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-19 00:56:46.679187 | orchestrator | Thursday 19 March 2026 00:54:51 +0000 (0:00:00.600) 0:09:09.985 ********
2026-03-19 00:56:46.679193 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.679200 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.679206 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.679212 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:46.679217 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:46.679223 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:46.679230 | orchestrator |
2026-03-19 00:56:46.679236 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-19 00:56:46.679243 | orchestrator | Thursday 19 March 2026 00:54:54 +0000 (0:00:02.784) 0:09:12.770 ********
2026-03-19 00:56:46.679250 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679256 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679262 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679268 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:46.679274 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:46.679280 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:46.679286 | orchestrator |
2026-03-19 00:56:46.679293 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-19 00:56:46.679299 | orchestrator |
2026-03-19 00:56:46.679305 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 00:56:46.679311 | orchestrator | Thursday 19 March 2026 00:54:55 +0000 (0:00:00.851) 0:09:13.621 ********
2026-03-19 00:56:46.679318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.679328 | orchestrator |
2026-03-19 00:56:46.679334 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 00:56:46.679341 | orchestrator | Thursday 19 March 2026 00:54:55 +0000 (0:00:00.810) 0:09:14.432 ********
2026-03-19 00:56:46.679347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.679353 | orchestrator |
2026-03-19 00:56:46.679359 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 00:56:46.679366 | orchestrator | Thursday 19 March 2026 00:54:56 +0000 (0:00:00.485) 0:09:14.917 ********
2026-03-19 00:56:46.679372 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679381 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679388 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679394 | orchestrator |
2026-03-19 00:56:46.679400 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 00:56:46.679405 | orchestrator | Thursday 19 March 2026 00:54:56 +0000 (0:00:00.414) 0:09:15.332 ********
2026-03-19 00:56:46.679412 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679418 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679436 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679444 | orchestrator |
2026-03-19 00:56:46.679450 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 00:56:46.679457 | orchestrator | Thursday 19 March 2026 00:54:57 +0000 (0:00:00.733) 0:09:16.066 ********
2026-03-19 00:56:46.679463 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679470 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679477 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679483 | orchestrator |
2026-03-19 00:56:46.679490 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 00:56:46.679496 | orchestrator | Thursday 19 March 2026 00:54:58 +0000 (0:00:00.668) 0:09:16.734 ********
2026-03-19 00:56:46.679502 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679509 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679515 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679521 | orchestrator |
2026-03-19 00:56:46.679527 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 00:56:46.679533 | orchestrator | Thursday 19 March 2026 00:54:58 +0000 (0:00:00.641) 0:09:17.375 ********
2026-03-19 00:56:46.679539 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679546 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679552 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679558 | orchestrator |
2026-03-19 00:56:46.679564 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 00:56:46.679571 | orchestrator | Thursday 19 March 2026 00:54:59 +0000 (0:00:00.420) 0:09:17.796 ********
2026-03-19 00:56:46.679577 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679583 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679590 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679596 | orchestrator |
2026-03-19 00:56:46.679612 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 00:56:46.679619 | orchestrator | Thursday 19 March 2026 00:54:59 +0000 (0:00:00.253) 0:09:18.049 ********
2026-03-19 00:56:46.679626 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679632 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679639 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679645 | orchestrator |
2026-03-19 00:56:46.679651 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 00:56:46.679658 | orchestrator | Thursday 19 March 2026 00:54:59 +0000 (0:00:00.251) 0:09:18.301 ********
2026-03-19 00:56:46.679664 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679671 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679677 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679683 | orchestrator |
2026-03-19 00:56:46.679689 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 00:56:46.679700 | orchestrator | Thursday 19 March 2026 00:55:00 +0000 (0:00:00.725) 0:09:19.026 ********
2026-03-19 00:56:46.679707 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679713 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679719 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679725 | orchestrator |
2026-03-19 00:56:46.679732 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 00:56:46.679739 | orchestrator | Thursday 19 March 2026 00:55:01 +0000 (0:00:00.885) 0:09:19.912 ********
2026-03-19 00:56:46.679745 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679751 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679757 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679764 | orchestrator |
2026-03-19 00:56:46.679770 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 00:56:46.679780 | orchestrator | Thursday 19 March 2026 00:55:01 +0000 (0:00:00.254) 0:09:20.167 ********
2026-03-19 00:56:46.679786 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679792 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679797 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679803 | orchestrator |
2026-03-19 00:56:46.679809 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 00:56:46.679814 | orchestrator | Thursday 19 March 2026 00:55:01 +0000 (0:00:00.249) 0:09:20.416 ********
2026-03-19 00:56:46.679819 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679825 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679831 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679837 | orchestrator |
2026-03-19 00:56:46.679843 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 00:56:46.679849 | orchestrator | Thursday 19 March 2026 00:55:02 +0000 (0:00:00.295) 0:09:20.712 ********
2026-03-19 00:56:46.679855 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679861 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679868 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679873 | orchestrator |
2026-03-19 00:56:46.679879 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 00:56:46.679885 | orchestrator | Thursday 19 March 2026 00:55:02 +0000 (0:00:00.457) 0:09:21.169 ********
2026-03-19 00:56:46.679891 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.679896 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.679901 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.679907 | orchestrator |
2026-03-19 00:56:46.679914 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 00:56:46.679920 | orchestrator | Thursday 19 March 2026 00:55:02 +0000 (0:00:00.286) 0:09:21.456 ********
2026-03-19 00:56:46.679926 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679932 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679937 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679942 | orchestrator |
2026-03-19 00:56:46.679947 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 00:56:46.679953 | orchestrator | Thursday 19 March 2026 00:55:03 +0000 (0:00:00.262) 0:09:21.719 ********
2026-03-19 00:56:46.679958 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.679963 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.679969 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.679975 | orchestrator |
2026-03-19 00:56:46.679987 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 00:56:46.679993 | orchestrator | Thursday 19 March 2026 00:55:03 +0000 (0:00:00.254) 0:09:21.973 ********
2026-03-19 00:56:46.679998 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.680004 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.680010 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.680016 | orchestrator |
2026-03-19 00:56:46.680023 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 00:56:46.680029 | orchestrator | Thursday 19 March 2026 00:55:03 +0000 (0:00:00.424) 0:09:22.398 ********
2026-03-19 00:56:46.680040 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.680046 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.680051 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.680056 | orchestrator |
2026-03-19 00:56:46.680062 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 00:56:46.680067 | orchestrator | Thursday 19 March 2026 00:55:04 +0000 (0:00:00.282) 0:09:22.680 ********
2026-03-19 00:56:46.680073 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.680078 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.680084 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.680089 | orchestrator |
2026-03-19 00:56:46.680095 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-19 00:56:46.680101 | orchestrator | Thursday 19 March 2026 00:55:04 +0000 (0:00:00.464) 0:09:23.144 ********
2026-03-19 00:56:46.680106 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.680112 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.680118 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-19 00:56:46.680124 | orchestrator |
2026-03-19 00:56:46.680129 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-19 00:56:46.680135 | orchestrator | Thursday 19 March 2026 00:55:05 +0000 (0:00:00.505) 0:09:23.649 ********
2026-03-19 00:56:46.680140 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 00:56:46.680146 | orchestrator |
2026-03-19 00:56:46.680152 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-19 00:56:46.680158 | orchestrator | Thursday 19 March 2026 00:55:07 +0000 (0:00:01.976) 0:09:25.626 ********
2026-03-19 00:56:46.680165 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]}) 
2026-03-19 00:56:46.680173 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.680180 | orchestrator |
2026-03-19 00:56:46.680186 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-19 00:56:46.680193 | orchestrator | Thursday 19 March 2026 00:55:07 +0000 (0:00:00.201) 0:09:25.827 ********
2026-03-19 00:56:46.680200 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-19 00:56:46.680211 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-19 00:56:46.680218 | orchestrator |
2026-03-19 00:56:46.680230 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-19 00:56:46.680237 | orchestrator | Thursday 19 March 2026 00:55:16 +0000 (0:00:08.906) 0:09:34.734 ********
2026-03-19 00:56:46.680243 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 00:56:46.680250 | orchestrator |
2026-03-19 00:56:46.680256 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-19 00:56:46.680262 | orchestrator | Thursday 19 March 2026 00:55:20 +0000 (0:00:04.306) 0:09:39.041 ********
2026-03-19 00:56:46.680268 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.680274 | orchestrator |
2026-03-19 00:56:46.680280 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-19 00:56:46.680286 | orchestrator | Thursday 19 March 2026 00:55:21 +0000 (0:00:00.663) 0:09:39.704 ********
2026-03-19 00:56:46.680293 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-19 00:56:46.680299 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-19 00:56:46.680310 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-19 00:56:46.680316 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-19 00:56:46.680322 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-19 00:56:46.680329 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-19 00:56:46.680335 | orchestrator |
2026-03-19 00:56:46.680342 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-19 00:56:46.680348 | orchestrator | Thursday 19 March 2026 00:55:22 +0000 (0:00:01.148) 0:09:40.853 ********
2026-03-19 00:56:46.680353 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:56:46.680360 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-03-19 00:56:46.680366 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-19 00:56:46.680372 | orchestrator |
2026-03-19 00:56:46.680379 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-19 00:56:46.680388 | orchestrator | Thursday 19 March 2026 00:55:24 +0000 (0:00:02.339) 0:09:43.192 ********
2026-03-19 00:56:46.680394 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-19 00:56:46.680401 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-19 00:56:46.680407 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680414 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 00:56:46.680420 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 00:56:46.680426 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680432 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 00:56:46.680439 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 00:56:46.680445 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680451 | orchestrator | 2026-03-19 00:56:46.680457 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-19 00:56:46.680464 | orchestrator | Thursday 19 March 2026 00:55:25 +0000 (0:00:01.113) 0:09:44.305 ******** 2026-03-19 00:56:46.680470 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680477 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680483 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680489 | orchestrator | 2026-03-19 00:56:46.680495 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-19 00:56:46.680501 | orchestrator | Thursday 19 March 2026 00:55:28 +0000 (0:00:02.841) 0:09:47.147 ******** 2026-03-19 00:56:46.680508 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.680514 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.680520 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.680526 | orchestrator | 2026-03-19 00:56:46.680532 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-19 00:56:46.680539 | orchestrator | Thursday 19 March 2026 00:55:29 +0000 (0:00:00.326) 0:09:47.474 ******** 2026-03-19 00:56:46.680545 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-19 00:56:46.680551 | orchestrator | 2026-03-19 00:56:46.680558 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-19 00:56:46.680564 | orchestrator | Thursday 19 March 2026 00:55:29 +0000 (0:00:00.586) 0:09:48.060 ******** 2026-03-19 00:56:46.680570 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.680577 | orchestrator | 2026-03-19 00:56:46.680583 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-19 00:56:46.680589 | orchestrator | Thursday 19 March 2026 00:55:30 +0000 (0:00:00.809) 0:09:48.869 ******** 2026-03-19 00:56:46.680596 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680639 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680646 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680663 | orchestrator | 2026-03-19 00:56:46.680669 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-19 00:56:46.680675 | orchestrator | Thursday 19 March 2026 00:55:31 +0000 (0:00:01.181) 0:09:50.051 ******** 2026-03-19 00:56:46.680681 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680687 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680693 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680699 | orchestrator | 2026-03-19 00:56:46.680705 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-19 00:56:46.680712 | orchestrator | Thursday 19 March 2026 00:55:32 +0000 (0:00:01.112) 0:09:51.163 ******** 2026-03-19 00:56:46.680718 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680724 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680729 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680734 | orchestrator | 2026-03-19 
00:56:46.680739 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-19 00:56:46.680750 | orchestrator | Thursday 19 March 2026 00:55:34 +0000 (0:00:01.779) 0:09:52.943 ******** 2026-03-19 00:56:46.680756 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680762 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680767 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680772 | orchestrator | 2026-03-19 00:56:46.680777 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-19 00:56:46.680783 | orchestrator | Thursday 19 March 2026 00:55:36 +0000 (0:00:02.014) 0:09:54.958 ******** 2026-03-19 00:56:46.680788 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.680794 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.680800 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.680805 | orchestrator | 2026-03-19 00:56:46.680811 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 00:56:46.680816 | orchestrator | Thursday 19 March 2026 00:55:37 +0000 (0:00:01.095) 0:09:56.053 ******** 2026-03-19 00:56:46.680822 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680827 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680832 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680838 | orchestrator | 2026-03-19 00:56:46.680843 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-19 00:56:46.680848 | orchestrator | Thursday 19 March 2026 00:55:38 +0000 (0:00:00.921) 0:09:56.974 ******** 2026-03-19 00:56:46.680854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.680859 | orchestrator | 2026-03-19 00:56:46.680865 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-19 00:56:46.680870 | orchestrator | Thursday 19 March 2026 00:55:38 +0000 (0:00:00.452) 0:09:57.427 ******** 2026-03-19 00:56:46.680876 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.680882 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.680887 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.680893 | orchestrator | 2026-03-19 00:56:46.680899 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-19 00:56:46.680905 | orchestrator | Thursday 19 March 2026 00:55:39 +0000 (0:00:00.275) 0:09:57.702 ******** 2026-03-19 00:56:46.680910 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.680916 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.680921 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.680926 | orchestrator | 2026-03-19 00:56:46.680936 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-19 00:56:46.680942 | orchestrator | Thursday 19 March 2026 00:55:40 +0000 (0:00:01.224) 0:09:58.926 ******** 2026-03-19 00:56:46.680948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:56:46.680954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:56:46.680961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:56:46.680967 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.680978 | orchestrator | 2026-03-19 00:56:46.680984 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-19 00:56:46.680991 | orchestrator | Thursday 19 March 2026 00:55:41 +0000 (0:00:00.642) 0:09:59.569 ******** 2026-03-19 00:56:46.680997 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681003 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681010 | orchestrator | ok: [testbed-node-5] 2026-03-19 
00:56:46.681016 | orchestrator | 2026-03-19 00:56:46.681022 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-19 00:56:46.681028 | orchestrator | 2026-03-19 00:56:46.681034 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 00:56:46.681041 | orchestrator | Thursday 19 March 2026 00:55:41 +0000 (0:00:00.497) 0:10:00.066 ******** 2026-03-19 00:56:46.681047 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.681053 | orchestrator | 2026-03-19 00:56:46.681059 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 00:56:46.681065 | orchestrator | Thursday 19 March 2026 00:55:42 +0000 (0:00:00.593) 0:10:00.659 ******** 2026-03-19 00:56:46.681071 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.681077 | orchestrator | 2026-03-19 00:56:46.681083 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 00:56:46.681089 | orchestrator | Thursday 19 March 2026 00:55:42 +0000 (0:00:00.461) 0:10:01.121 ******** 2026-03-19 00:56:46.681096 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681103 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681108 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681112 | orchestrator | 2026-03-19 00:56:46.681115 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 00:56:46.681119 | orchestrator | Thursday 19 March 2026 00:55:43 +0000 (0:00:00.403) 0:10:01.524 ******** 2026-03-19 00:56:46.681123 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681127 | orchestrator | ok: [testbed-node-4] 2026-03-19 
00:56:46.681130 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681134 | orchestrator | 2026-03-19 00:56:46.681138 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 00:56:46.681141 | orchestrator | Thursday 19 March 2026 00:55:43 +0000 (0:00:00.645) 0:10:02.170 ******** 2026-03-19 00:56:46.681145 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681149 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681153 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681156 | orchestrator | 2026-03-19 00:56:46.681160 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 00:56:46.681164 | orchestrator | Thursday 19 March 2026 00:55:44 +0000 (0:00:00.630) 0:10:02.800 ******** 2026-03-19 00:56:46.681167 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681171 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681175 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681178 | orchestrator | 2026-03-19 00:56:46.681182 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 00:56:46.681186 | orchestrator | Thursday 19 March 2026 00:55:44 +0000 (0:00:00.600) 0:10:03.400 ******** 2026-03-19 00:56:46.681190 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681198 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681202 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681206 | orchestrator | 2026-03-19 00:56:46.681210 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 00:56:46.681213 | orchestrator | Thursday 19 March 2026 00:55:45 +0000 (0:00:00.420) 0:10:03.822 ******** 2026-03-19 00:56:46.681217 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681221 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681225 | orchestrator | skipping: 
[testbed-node-5] 2026-03-19 00:56:46.681232 | orchestrator | 2026-03-19 00:56:46.681236 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 00:56:46.681239 | orchestrator | Thursday 19 March 2026 00:55:45 +0000 (0:00:00.283) 0:10:04.105 ******** 2026-03-19 00:56:46.681243 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681247 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681251 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681254 | orchestrator | 2026-03-19 00:56:46.681258 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 00:56:46.681262 | orchestrator | Thursday 19 March 2026 00:55:45 +0000 (0:00:00.271) 0:10:04.377 ******** 2026-03-19 00:56:46.681265 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681269 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681273 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681277 | orchestrator | 2026-03-19 00:56:46.681280 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 00:56:46.681284 | orchestrator | Thursday 19 March 2026 00:55:46 +0000 (0:00:00.656) 0:10:05.033 ******** 2026-03-19 00:56:46.681288 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681292 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681295 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681299 | orchestrator | 2026-03-19 00:56:46.681303 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 00:56:46.681307 | orchestrator | Thursday 19 March 2026 00:55:47 +0000 (0:00:00.822) 0:10:05.856 ******** 2026-03-19 00:56:46.681310 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681314 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681318 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
00:56:46.681322 | orchestrator | 2026-03-19 00:56:46.681325 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 00:56:46.681331 | orchestrator | Thursday 19 March 2026 00:55:47 +0000 (0:00:00.256) 0:10:06.112 ******** 2026-03-19 00:56:46.681335 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681339 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681343 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681350 | orchestrator | 2026-03-19 00:56:46.681355 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 00:56:46.681361 | orchestrator | Thursday 19 March 2026 00:55:47 +0000 (0:00:00.257) 0:10:06.370 ******** 2026-03-19 00:56:46.681366 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681372 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681377 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681382 | orchestrator | 2026-03-19 00:56:46.681388 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 00:56:46.681394 | orchestrator | Thursday 19 March 2026 00:55:48 +0000 (0:00:00.291) 0:10:06.662 ******** 2026-03-19 00:56:46.681399 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681405 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681411 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681417 | orchestrator | 2026-03-19 00:56:46.681424 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 00:56:46.681428 | orchestrator | Thursday 19 March 2026 00:55:48 +0000 (0:00:00.492) 0:10:07.154 ******** 2026-03-19 00:56:46.681432 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681435 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681439 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681443 | orchestrator | 2026-03-19 
00:56:46.681446 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 00:56:46.681450 | orchestrator | Thursday 19 March 2026 00:55:48 +0000 (0:00:00.299) 0:10:07.454 ******** 2026-03-19 00:56:46.681454 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681458 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681461 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681465 | orchestrator | 2026-03-19 00:56:46.681469 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 00:56:46.681476 | orchestrator | Thursday 19 March 2026 00:55:49 +0000 (0:00:00.240) 0:10:07.694 ******** 2026-03-19 00:56:46.681480 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681484 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681488 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681495 | orchestrator | 2026-03-19 00:56:46.681501 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 00:56:46.681506 | orchestrator | Thursday 19 March 2026 00:55:49 +0000 (0:00:00.276) 0:10:07.971 ******** 2026-03-19 00:56:46.681513 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681519 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681526 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681532 | orchestrator | 2026-03-19 00:56:46.681538 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 00:56:46.681542 | orchestrator | Thursday 19 March 2026 00:55:49 +0000 (0:00:00.428) 0:10:08.399 ******** 2026-03-19 00:56:46.681546 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681550 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681557 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681562 | orchestrator | 2026-03-19 00:56:46.681569 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 00:56:46.681575 | orchestrator | Thursday 19 March 2026 00:55:50 +0000 (0:00:00.429) 0:10:08.829 ******** 2026-03-19 00:56:46.681582 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:56:46.681588 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:56:46.681592 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:56:46.681595 | orchestrator | 2026-03-19 00:56:46.681599 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-19 00:56:46.681619 | orchestrator | Thursday 19 March 2026 00:55:50 +0000 (0:00:00.548) 0:10:09.378 ******** 2026-03-19 00:56:46.681629 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.681635 | orchestrator | 2026-03-19 00:56:46.681641 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 00:56:46.681647 | orchestrator | Thursday 19 March 2026 00:55:51 +0000 (0:00:00.706) 0:10:10.084 ******** 2026-03-19 00:56:46.681653 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681658 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 00:56:46.681664 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 00:56:46.681670 | orchestrator | 2026-03-19 00:56:46.681676 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 00:56:46.681684 | orchestrator | Thursday 19 March 2026 00:55:53 +0000 (0:00:02.244) 0:10:12.329 ******** 2026-03-19 00:56:46.681691 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 00:56:46.681698 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 00:56:46.681703 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.681707 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-19 00:56:46.681710 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 00:56:46.681714 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.681718 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 00:56:46.681722 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 00:56:46.681725 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.681729 | orchestrator | 2026-03-19 00:56:46.681733 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-19 00:56:46.681736 | orchestrator | Thursday 19 March 2026 00:55:55 +0000 (0:00:01.436) 0:10:13.765 ******** 2026-03-19 00:56:46.681740 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:56:46.681744 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:56:46.681747 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:56:46.681751 | orchestrator | 2026-03-19 00:56:46.681755 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-19 00:56:46.681762 | orchestrator | Thursday 19 March 2026 00:55:55 +0000 (0:00:00.296) 0:10:14.062 ******** 2026-03-19 00:56:46.681766 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:56:46.681771 | orchestrator | 2026-03-19 00:56:46.681781 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-19 00:56:46.681787 | orchestrator | Thursday 19 March 2026 00:55:56 +0000 (0:00:00.729) 0:10:14.792 ******** 2026-03-19 00:56:46.681794 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 00:56:46.681802 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 00:56:46.681806 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-19 00:56:46.681810 | orchestrator | 2026-03-19 00:56:46.681814 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-19 00:56:46.681817 | orchestrator | Thursday 19 March 2026 00:55:57 +0000 (0:00:00.847) 0:10:15.639 ******** 2026-03-19 00:56:46.681821 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681825 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 00:56:46.681829 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681832 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 00:56:46.681836 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681840 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 00:56:46.681843 | orchestrator | 2026-03-19 00:56:46.681847 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 00:56:46.681851 | orchestrator | Thursday 19 March 2026 00:56:01 +0000 (0:00:04.595) 0:10:20.234 ******** 2026-03-19 00:56:46.681855 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681858 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 00:56:46.681862 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681866 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 00:56:46.681873 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 00:56:46.681879 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 00:56:46.681885 | orchestrator | 2026-03-19 00:56:46.681891 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 00:56:46.681897 | orchestrator | Thursday 19 March 2026 00:56:04 +0000 (0:00:02.685) 0:10:22.920 ******** 2026-03-19 00:56:46.681904 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 00:56:46.681911 | orchestrator | changed: [testbed-node-3] 2026-03-19 00:56:46.681915 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 00:56:46.681919 | orchestrator | changed: [testbed-node-4] 2026-03-19 00:56:46.681923 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 00:56:46.681927 | orchestrator | changed: [testbed-node-5] 2026-03-19 00:56:46.681930 | orchestrator | 2026-03-19 00:56:46.681937 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-19 00:56:46.681941 | orchestrator | Thursday 19 March 2026 00:56:05 +0000 (0:00:01.170) 0:10:24.090 ******** 2026-03-19 00:56:46.681945 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-19 00:56:46.681952 | orchestrator | 2026-03-19 00:56:46.681956 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-19 00:56:46.681959 | orchestrator | Thursday 19 March 2026 00:56:05 +0000 (0:00:00.222) 0:10:24.313 ******** 2026-03-19 00:56:46.681963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})
2026-03-19 00:56:46.681967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.681971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.681975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.681979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.681982 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.681986 | orchestrator |
2026-03-19 00:56:46.681990 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-19 00:56:46.681994 | orchestrator | Thursday 19 March 2026 00:56:06 +0000 (0:00:00.605) 0:10:24.918 ********
2026-03-19 00:56:46.681997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682042 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.682046 | orchestrator |
2026-03-19 00:56:46.682050 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-19 00:56:46.682054 | orchestrator | Thursday 19 March 2026 00:56:07 +0000 (0:00:00.565) 0:10:25.483 ********
2026-03-19 00:56:46.682058 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682062 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682065 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682069 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682073 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-19 00:56:46.682077 | orchestrator |
2026-03-19 00:56:46.682081 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-19 00:56:46.682085 | orchestrator | Thursday 19 March 2026 00:56:33 +0000 (0:00:26.687) 0:10:52.171 ********
2026-03-19 00:56:46.682089 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.682092 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.682096 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.682100 | orchestrator |
2026-03-19 00:56:46.682104 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-19 00:56:46.682113 | orchestrator | Thursday 19 March 2026 00:56:33 +0000 (0:00:00.254) 0:10:52.425 ********
2026-03-19 00:56:46.682117 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.682120 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.682124 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.682128 | orchestrator |
2026-03-19 00:56:46.682132 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-19 00:56:46.682136 | orchestrator | Thursday 19 March 2026 00:56:34 +0000 (0:00:00.434) 0:10:52.860 ********
2026-03-19 00:56:46.682139 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.682143 | orchestrator |
2026-03-19 00:56:46.682147 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-19 00:56:46.682151 | orchestrator | Thursday 19 March 2026 00:56:34 +0000 (0:00:00.444) 0:10:53.305 ********
2026-03-19 00:56:46.682158 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.682162 | orchestrator |
2026-03-19 00:56:46.682166 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-19 00:56:46.682169 | orchestrator | Thursday 19 March 2026 00:56:35 +0000 (0:00:00.563) 0:10:53.868 ********
2026-03-19 00:56:46.682173 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.682180 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.682186 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.682191 | orchestrator |
2026-03-19 00:56:46.682197 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-19 00:56:46.682204 | orchestrator | Thursday 19 March 2026 00:56:36 +0000 (0:00:01.117) 0:10:54.986 ********
2026-03-19 00:56:46.682211 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.682215 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.682219 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.682223 | orchestrator |
2026-03-19 00:56:46.682227 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-19 00:56:46.682230 | orchestrator | Thursday 19 March 2026 00:56:37 +0000 (0:00:01.001) 0:10:55.988 ********
2026-03-19 00:56:46.682234 | orchestrator | changed: [testbed-node-3]
2026-03-19 00:56:46.682238 | orchestrator | changed: [testbed-node-5]
2026-03-19 00:56:46.682241 | orchestrator | changed: [testbed-node-4]
2026-03-19 00:56:46.682245 | orchestrator |
2026-03-19 00:56:46.682249 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-19 00:56:46.682253 | orchestrator | Thursday 19 March 2026 00:56:39 +0000 (0:00:02.767) 0:10:57.760 ********
2026-03-19 00:56:46.682256 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 00:56:46.682260 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 00:56:46.682264 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 00:56:46.682268 | orchestrator |
2026-03-19 00:56:46.682274 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-19 00:56:46.682280 | orchestrator | Thursday 19 March 2026 00:56:42 +0000 (0:00:02.767) 0:11:00.528 ********
2026-03-19 00:56:46.682286 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.682292 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.682298 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.682303 | orchestrator |
2026-03-19 00:56:46.682309 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-19 00:56:46.682315 | orchestrator | Thursday 19 March 2026 00:56:42 +0000 (0:00:00.270) 0:11:00.799 ********
2026-03-19 00:56:46.682322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 00:56:46.682333 | orchestrator |
2026-03-19 00:56:46.682340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-19 00:56:46.682346 | orchestrator | Thursday 19 March 2026 00:56:42 +0000 (0:00:00.591) 0:11:01.390 ********
2026-03-19 00:56:46.682352 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.682356 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.682359 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.682363 | orchestrator |
2026-03-19 00:56:46.682367 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-19 00:56:46.682371 | orchestrator | Thursday 19 March 2026 00:56:43 +0000 (0:00:00.265) 0:11:01.655 ********
2026-03-19 00:56:46.682374 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.682378 | orchestrator | skipping: [testbed-node-4]
2026-03-19 00:56:46.682382 | orchestrator | skipping: [testbed-node-5]
2026-03-19 00:56:46.682386 | orchestrator |
2026-03-19 00:56:46.682389 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-19 00:56:46.682393 | orchestrator | Thursday 19 March 2026 00:56:43 +0000 (0:00:00.275) 0:11:01.931 ********
2026-03-19 00:56:46.682400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 00:56:46.682406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 00:56:46.682412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 00:56:46.682418 | orchestrator | skipping: [testbed-node-3]
2026-03-19 00:56:46.682425 | orchestrator |
2026-03-19 00:56:46.682431 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-19 00:56:46.682438 | orchestrator | Thursday 19 March 2026 00:56:44 +0000 (0:00:00.900) 0:11:02.832 ********
2026-03-19 00:56:46.682445 | orchestrator | ok: [testbed-node-3]
2026-03-19 00:56:46.682451 | orchestrator | ok: [testbed-node-4]
2026-03-19 00:56:46.682457 | orchestrator | ok: [testbed-node-5]
2026-03-19 00:56:46.682460 | orchestrator |
2026-03-19 00:56:46.682464 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:56:46.682468 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-19 00:56:46.682472 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-19 00:56:46.682476 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-19 00:56:46.682480 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-19 00:56:46.682484 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-19 00:56:46.682490 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-19 00:56:46.682494 | orchestrator |
2026-03-19 00:56:46.682498 | orchestrator |
2026-03-19 00:56:46.682502 | orchestrator |
2026-03-19 00:56:46.682506 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:56:46.682509 | orchestrator | Thursday 19 March 2026 00:56:44 +0000 (0:00:00.242) 0:11:03.074 ********
2026-03-19 00:56:46.682513 | orchestrator | ===============================================================================
2026-03-19 00:56:46.682517 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 53.39s
2026-03-19 00:56:46.682520 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.70s
2026-03-19 00:56:46.682576 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.69s
2026-03-19 00:56:46.682591 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 26.69s
2026-03-19 00:56:46.682599 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.23s
2026-03-19 00:56:46.682616 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.44s
2026-03-19 00:56:46.682620 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.81s
2026-03-19 00:56:46.682623 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.67s
2026-03-19 00:56:46.682627 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.51s
2026-03-19 00:56:46.682631 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.91s
2026-03-19 00:56:46.682635 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.32s
2026-03-19 00:56:46.682638 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.42s
2026-03-19 00:56:46.682642 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.84s
2026-03-19 00:56:46.682646 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.60s
2026-03-19 00:56:46.682652 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.55s
2026-03-19 00:56:46.682656 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.31s
2026-03-19 00:56:46.682659 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.96s
2026-03-19 00:56:46.682663 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.83s
2026-03-19 00:56:46.682667 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.78s
2026-03-19 00:56:46.682671 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.38s
2026-03-19 00:56:46.682674 | orchestrator | 2026-03-19 00:56:46 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED
2026-03-19 00:56:46.682678 | orchestrator | 2026-03-19 00:56:46 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:56:49.695340 | orchestrator | 2026-03-19 00:56:49 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED
2026-03-19 00:56:49.696954 | orchestrator | 2026-03-19 00:56:49 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state STARTED
2026-03-19 00:56:49.698142 | orchestrator | 2026-03-19 00:56:49 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:56:52.734932 | orchestrator | 2026-03-19 00:56:52 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED
2026-03-19 00:56:52.742161 | orchestrator | 2026-03-19 00:56:52 | INFO  | Task a7949153-4022-496a-a4b3-991828df50e4 is in state SUCCESS
2026-03-19 00:56:52.743261 | orchestrator |
2026-03-19 00:56:52.743331 | orchestrator |
2026-03-19 00:56:52.743341 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-19 00:56:52.743349 | orchestrator |
2026-03-19 00:56:52.743511 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-19 00:56:52.743522 | orchestrator | Thursday 19 March 2026 00:54:02 +0000 (0:00:00.096) 0:00:00.096 ********
2026-03-19 00:56:52.743528 | orchestrator | ok: [localhost] => {
2026-03-19 00:56:52.743537 | orchestrator | "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-19 00:56:52.743545 | orchestrator | }
2026-03-19 00:56:52.743552 | orchestrator |
2026-03-19 00:56:52.743559 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-19 00:56:52.743566 | orchestrator | Thursday 19 March 2026 00:54:02 +0000 (0:00:00.051) 0:00:00.147 ********
2026-03-19 00:56:52.743573 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-19 00:56:52.743581 | orchestrator | ...ignoring
2026-03-19 00:56:52.743607 | orchestrator |
2026-03-19 00:56:52.743614 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-19 00:56:52.743619 | orchestrator | Thursday 19 March 2026 00:54:05 +0000 (0:00:02.916) 0:00:03.063 ********
2026-03-19 00:56:52.743657 | orchestrator | skipping: [localhost]
2026-03-19 00:56:52.743663 | orchestrator |
2026-03-19 00:56:52.743669 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-19 00:56:52.743675 | orchestrator | Thursday 19 March 2026 00:54:05 +0000 (0:00:00.050) 0:00:03.114 ********
2026-03-19 00:56:52.743682 | orchestrator | ok: [localhost]
2026-03-19 00:56:52.743688 | orchestrator |
2026-03-19 00:56:52.743695 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 00:56:52.743701 | orchestrator |
2026-03-19 00:56:52.743708 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 00:56:52.743714 | orchestrator | Thursday 19 March 2026 00:54:05 +0000 (0:00:00.230) 0:00:03.344 ********
2026-03-19 00:56:52.743721 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:56:52.743727 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:56:52.743734 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:56:52.743740 | orchestrator |
2026-03-19 00:56:52.743747 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 00:56:52.743753 | orchestrator | Thursday 19 March 2026 00:54:05 +0000 (0:00:00.315) 0:00:03.659 ********
2026-03-19 00:56:52.743759 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-19 00:56:52.743767 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-19 00:56:52.743773 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-19 00:56:52.743780 | orchestrator |
2026-03-19 00:56:52.743786 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-19 00:56:52.743792 | orchestrator |
2026-03-19 00:56:52.743799 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-19 00:56:52.743806 | orchestrator | Thursday 19 March 2026 00:54:06 +0000 (0:00:00.379) 0:00:04.039 ********
2026-03-19 00:56:52.743812 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 00:56:52.743819 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 00:56:52.743824 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 00:56:52.743830 | orchestrator |
2026-03-19 00:56:52.743836 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-19 00:56:52.743841 | orchestrator | Thursday 19 March 2026 00:54:06 +0000 (0:00:00.350) 0:00:04.390 ********
2026-03-19 00:56:52.743848 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:52.743855 | orchestrator |
2026-03-19 00:56:52.743861 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-19 00:56:52.743867 | orchestrator | Thursday 19 March 2026
00:54:07 +0000 (0:00:00.626) 0:00:05.016 ******** 2026-03-19 00:56:52.743912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 00:56:52.743933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 00:56:52.743944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 00:56:52.743957 | orchestrator | 2026-03-19 00:56:52.743969 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-19 00:56:52.743976 | orchestrator | Thursday 19 March 2026 00:54:10 +0000 (0:00:03.204) 0:00:08.221 ******** 2026-03-19 00:56:52.743982 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.743989 | 
orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:52.743995 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:52.744001 | orchestrator |
2026-03-19 00:56:52.744007 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-19 00:56:52.744014 | orchestrator | Thursday 19 March 2026 00:54:10 +0000 (0:00:00.579) 0:00:08.800 ********
2026-03-19 00:56:52.744020 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:52.744026 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:52.744032 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:52.744037 | orchestrator |
2026-03-19 00:56:52.744043 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-19 00:56:52.744049 | orchestrator | Thursday 19 March 2026 00:54:12 +0000 (0:00:01.396) 0:00:10.197 ********
2026-03-19 00:56:52.744055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 00:56:52.744071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 00:56:52.744085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-19 00:56:52.744092 | orchestrator |
2026-03-19 00:56:52.744099 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-19 00:56:52.744105 | orchestrator | Thursday 19 March 2026 00:54:16 +0000 (0:00:04.129) 0:00:14.326 ********
2026-03-19 00:56:52.744111 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:52.744117 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:52.744123 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:52.744129 | orchestrator |
2026-03-19 00:56:52.744135 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-19 00:56:52.744141 | orchestrator | Thursday 19 March 2026 00:54:17 +0000 (0:00:01.105) 0:00:15.432 ********
2026-03-19 00:56:52.744147 | orchestrator | changed: [testbed-node-0]
2026-03-19 00:56:52.744153 | orchestrator | changed: [testbed-node-2]
2026-03-19 00:56:52.744159 | orchestrator | changed: [testbed-node-1]
2026-03-19 00:56:52.744165 | orchestrator |
2026-03-19 00:56:52.744171 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-19 00:56:52.744176 | orchestrator | Thursday 19 March 2026 00:54:22 +0000 (0:00:04.660) 0:00:20.093 ********
2026-03-19 00:56:52.744183 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:56:52.744190 | orchestrator |
2026-03-19 00:56:52.744199 | orchestrator | TASK [service-cert-copy : mariadb |
Copying over extra CA certificates] ********
2026-03-19 00:56:52.744262 | orchestrator | Thursday 19 March 2026 00:54:23 +0000 (0:00:01.069) 0:00:21.162 ********
2026-03-19 00:56:52.744279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-19 00:56:52.744287 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:52.744295 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.12')
2026-03-19 00:56:52.744301 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:52.744316 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.11')
2026-03-19 00:56:52.744330 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:52.744342 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-19 00:56:52.744348 | orchestrator | Thursday 19 March 2026 00:54:26 +0000 (0:00:03.171) 0:00:24.334 ********
2026-03-19 00:56:52.744355 | orchestrator | skipping: [testbed-node-0] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.10')
2026-03-19 00:56:52.744361 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:52.744376 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.11')
2026-03-19 00:56:52.744497 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:52.744510 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.12')
2026-03-19 00:56:52.744518 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:52.744531 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-19 00:56:52.744538 | orchestrator | Thursday 19 March 2026 00:54:29 +0000 (0:00:03.120) 0:00:27.454 ********
2026-03-19 00:56:52.744557 | orchestrator | skipping: [testbed-node-0] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.10')
2026-03-19 00:56:52.744572 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:56:52.744583 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.11')
2026-03-19 00:56:52.744613 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:56:52.744624 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.12')
2026-03-19 00:56:52.744636 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:56:52.744648 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-19 00:56:52.744654 | orchestrator | Thursday 19 March 2026 00:54:33 +0000 (0:00:03.532) 0:00:30.986 ********
2026-03-19 00:56:52.744665 | orchestrator | changed: [testbed-node-0] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.10')
2026-03-19 00:56:52.744677 | orchestrator | changed: [testbed-node-2] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.12')
2026-03-19 00:56:52.744694 | orchestrator | changed: [testbed-node-1] => (item=mariadb; same service definition as above, MYSQL_HOST: '192.168.16.11')
2026-03-19 00:56:52.744707 | orchestrator | TASK
[mariadb : Create MariaDB volume] ***************************************** 2026-03-19 00:56:52.744712 | orchestrator | Thursday 19 March 2026 00:54:36 +0000 (0:00:03.424) 0:00:34.411 ******** 2026-03-19 00:56:52.744719 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.744724 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:52.744730 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:52.744736 | orchestrator | 2026-03-19 00:56:52.744741 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-19 00:56:52.744747 | orchestrator | Thursday 19 March 2026 00:54:37 +0000 (0:00:00.859) 0:00:35.271 ******** 2026-03-19 00:56:52.744754 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.744760 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.744765 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.744775 | orchestrator | 2026-03-19 00:56:52.744781 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-19 00:56:52.744787 | orchestrator | Thursday 19 March 2026 00:54:37 +0000 (0:00:00.306) 0:00:35.578 ******** 2026-03-19 00:56:52.744792 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.744798 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.744804 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.744809 | orchestrator | 2026-03-19 00:56:52.744815 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-19 00:56:52.744820 | orchestrator | Thursday 19 March 2026 00:54:37 +0000 (0:00:00.288) 0:00:35.866 ******** 2026-03-19 00:56:52.744827 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-19 00:56:52.744834 | orchestrator | ...ignoring 2026-03-19 00:56:52.744840 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-19 00:56:52.744846 | orchestrator | ...ignoring 2026-03-19 00:56:52.744852 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-19 00:56:52.744857 | orchestrator | ...ignoring 2026-03-19 00:56:52.744863 | orchestrator | 2026-03-19 00:56:52.744868 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-19 00:56:52.744875 | orchestrator | Thursday 19 March 2026 00:54:48 +0000 (0:00:10.906) 0:00:46.773 ******** 2026-03-19 00:56:52.744881 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.744887 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.744893 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.744900 | orchestrator | 2026-03-19 00:56:52.744911 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-19 00:56:52.744918 | orchestrator | Thursday 19 March 2026 00:54:49 +0000 (0:00:00.391) 0:00:47.164 ******** 2026-03-19 00:56:52.744924 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.744930 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.744937 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.744943 | orchestrator | 2026-03-19 00:56:52.744949 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-19 00:56:52.744955 | orchestrator | Thursday 19 March 2026 00:54:49 +0000 (0:00:00.404) 0:00:47.568 ******** 2026-03-19 00:56:52.744961 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.744968 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.744974 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.744979 | orchestrator | 2026-03-19 00:56:52.744985 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-19 00:56:52.744991 | orchestrator | Thursday 19 March 2026 00:54:50 +0000 (0:00:00.481) 0:00:48.050 ******** 2026-03-19 00:56:52.744996 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.745002 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745009 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745016 | orchestrator | 2026-03-19 00:56:52.745022 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-19 00:56:52.745028 | orchestrator | Thursday 19 March 2026 00:54:50 +0000 (0:00:00.616) 0:00:48.667 ******** 2026-03-19 00:56:52.745034 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745040 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.745046 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.745051 | orchestrator | 2026-03-19 00:56:52.745057 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-19 00:56:52.745063 | orchestrator | Thursday 19 March 2026 00:54:51 +0000 (0:00:00.414) 0:00:49.081 ******** 2026-03-19 00:56:52.745074 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.745080 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745086 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745098 | orchestrator | 2026-03-19 00:56:52.745104 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 00:56:52.745111 | orchestrator | Thursday 19 March 2026 00:54:51 +0000 (0:00:00.408) 0:00:49.489 ******** 2026-03-19 00:56:52.745117 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745123 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745149 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-19 00:56:52.745156 | orchestrator | 2026-03-19 
00:56:52.745162 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-19 00:56:52.745167 | orchestrator | Thursday 19 March 2026 00:54:51 +0000 (0:00:00.382) 0:00:49.871 ******** 2026-03-19 00:56:52.745173 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745178 | orchestrator | 2026-03-19 00:56:52.745184 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-19 00:56:52.745189 | orchestrator | Thursday 19 March 2026 00:55:02 +0000 (0:00:10.123) 0:00:59.995 ******** 2026-03-19 00:56:52.745194 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745200 | orchestrator | 2026-03-19 00:56:52.745206 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 00:56:52.745211 | orchestrator | Thursday 19 March 2026 00:55:02 +0000 (0:00:00.184) 0:01:00.179 ******** 2026-03-19 00:56:52.745217 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.745222 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745228 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745234 | orchestrator | 2026-03-19 00:56:52.745239 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-19 00:56:52.745246 | orchestrator | Thursday 19 March 2026 00:55:02 +0000 (0:00:00.704) 0:01:00.884 ******** 2026-03-19 00:56:52.745252 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745258 | orchestrator | 2026-03-19 00:56:52.745265 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-19 00:56:52.745270 | orchestrator | Thursday 19 March 2026 00:55:10 +0000 (0:00:07.268) 0:01:08.153 ******** 2026-03-19 00:56:52.745276 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745281 | orchestrator | 2026-03-19 00:56:52.745286 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-19 00:56:52.745293 | orchestrator | Thursday 19 March 2026 00:55:11 +0000 (0:00:01.637) 0:01:09.790 ******** 2026-03-19 00:56:52.745298 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745304 | orchestrator | 2026-03-19 00:56:52.745309 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-19 00:56:52.745315 | orchestrator | Thursday 19 March 2026 00:55:14 +0000 (0:00:02.470) 0:01:12.260 ******** 2026-03-19 00:56:52.745322 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745327 | orchestrator | 2026-03-19 00:56:52.745334 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-19 00:56:52.745340 | orchestrator | Thursday 19 March 2026 00:55:14 +0000 (0:00:00.215) 0:01:12.476 ******** 2026-03-19 00:56:52.745347 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.745352 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745358 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745364 | orchestrator | 2026-03-19 00:56:52.745371 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-19 00:56:52.745376 | orchestrator | Thursday 19 March 2026 00:55:14 +0000 (0:00:00.277) 0:01:12.754 ******** 2026-03-19 00:56:52.745382 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.745388 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:52.745394 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:52.745399 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-19 00:56:52.745405 | orchestrator | 2026-03-19 00:56:52.745411 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-19 00:56:52.745417 | orchestrator | skipping: no hosts matched 2026-03-19 00:56:52.745422 | orchestrator | 2026-03-19 00:56:52.745428 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-19 00:56:52.745441 | orchestrator | 2026-03-19 00:56:52.745447 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 00:56:52.745460 | orchestrator | Thursday 19 March 2026 00:55:15 +0000 (0:00:00.313) 0:01:13.067 ******** 2026-03-19 00:56:52.745464 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:56:52.745468 | orchestrator | 2026-03-19 00:56:52.745472 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 00:56:52.745475 | orchestrator | Thursday 19 March 2026 00:55:31 +0000 (0:00:16.282) 0:01:29.350 ******** 2026-03-19 00:56:52.745479 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.745483 | orchestrator | 2026-03-19 00:56:52.745486 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 00:56:52.745490 | orchestrator | Thursday 19 March 2026 00:55:45 +0000 (0:00:14.495) 0:01:43.845 ******** 2026-03-19 00:56:52.745494 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.745498 | orchestrator | 2026-03-19 00:56:52.745501 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-19 00:56:52.745507 | orchestrator | 2026-03-19 00:56:52.745512 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 00:56:52.745518 | orchestrator | Thursday 19 March 2026 00:55:47 +0000 (0:00:02.122) 0:01:45.967 ******** 2026-03-19 00:56:52.745524 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:56:52.745530 | orchestrator | 2026-03-19 00:56:52.745536 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 00:56:52.745542 | orchestrator | Thursday 19 March 2026 00:56:04 +0000 (0:00:16.718) 0:02:02.686 ******** 2026-03-19 00:56:52.745549 | 
orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 2026-03-19 00:56:52.745555 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.745559 | orchestrator | 2026-03-19 00:56:52.745562 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 00:56:52.745566 | orchestrator | Thursday 19 March 2026 00:56:20 +0000 (0:00:15.798) 0:02:18.484 ******** 2026-03-19 00:56:52.745576 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.745580 | orchestrator | 2026-03-19 00:56:52.745584 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-19 00:56:52.745588 | orchestrator | 2026-03-19 00:56:52.745684 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 00:56:52.745688 | orchestrator | Thursday 19 March 2026 00:56:22 +0000 (0:00:02.475) 0:02:20.960 ******** 2026-03-19 00:56:52.745691 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745695 | orchestrator | 2026-03-19 00:56:52.745699 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 00:56:52.745703 | orchestrator | Thursday 19 March 2026 00:56:33 +0000 (0:00:10.121) 0:02:31.081 ******** 2026-03-19 00:56:52.745706 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745710 | orchestrator | 2026-03-19 00:56:52.745714 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 00:56:52.745717 | orchestrator | Thursday 19 March 2026 00:56:36 +0000 (0:00:03.488) 0:02:34.569 ******** 2026-03-19 00:56:52.745721 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745725 | orchestrator | 2026-03-19 00:56:52.745728 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-19 00:56:52.745732 | orchestrator | 2026-03-19 00:56:52.745736 | 
orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-19 00:56:52.745739 | orchestrator | Thursday 19 March 2026 00:56:38 +0000 (0:00:02.358) 0:02:36.927 ******** 2026-03-19 00:56:52.745743 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:56:52.745747 | orchestrator | 2026-03-19 00:56:52.745750 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-19 00:56:52.745754 | orchestrator | Thursday 19 March 2026 00:56:39 +0000 (0:00:00.645) 0:02:37.573 ******** 2026-03-19 00:56:52.745758 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745767 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745770 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745774 | orchestrator | 2026-03-19 00:56:52.745778 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-19 00:56:52.745782 | orchestrator | Thursday 19 March 2026 00:56:41 +0000 (0:00:02.256) 0:02:39.829 ******** 2026-03-19 00:56:52.745785 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745789 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745793 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745797 | orchestrator | 2026-03-19 00:56:52.745800 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-19 00:56:52.745804 | orchestrator | Thursday 19 March 2026 00:56:43 +0000 (0:00:02.064) 0:02:41.893 ******** 2026-03-19 00:56:52.745808 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745811 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745815 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745819 | orchestrator | 2026-03-19 00:56:52.745825 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-19 
00:56:52.745831 | orchestrator | Thursday 19 March 2026 00:56:45 +0000 (0:00:01.969) 0:02:43.862 ******** 2026-03-19 00:56:52.745838 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745848 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745854 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:56:52.745860 | orchestrator | 2026-03-19 00:56:52.745865 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-19 00:56:52.745871 | orchestrator | Thursday 19 March 2026 00:56:48 +0000 (0:00:02.312) 0:02:46.175 ******** 2026-03-19 00:56:52.745876 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:56:52.745882 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:56:52.745888 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:56:52.745895 | orchestrator | 2026-03-19 00:56:52.745901 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-19 00:56:52.745907 | orchestrator | Thursday 19 March 2026 00:56:50 +0000 (0:00:02.705) 0:02:48.881 ******** 2026-03-19 00:56:52.745913 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:56:52.745919 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:56:52.745925 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:56:52.745929 | orchestrator | 2026-03-19 00:56:52.745933 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:56:52.745943 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-19 00:56:52.745948 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-19 00:56:52.745953 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-19 00:56:52.745956 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  
2026-03-19 00:56:52.745960 | orchestrator | 2026-03-19 00:56:52.745964 | orchestrator | 2026-03-19 00:56:52.745968 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:56:52.745971 | orchestrator | Thursday 19 March 2026 00:56:51 +0000 (0:00:00.189) 0:02:49.070 ******** 2026-03-19 00:56:52.745976 | orchestrator | =============================================================================== 2026-03-19 00:56:52.745982 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.00s 2026-03-19 00:56:52.745988 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 30.29s 2026-03-19 00:56:52.745994 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2026-03-19 00:56:52.746002 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.12s 2026-03-19 00:56:52.746063 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.12s 2026-03-19 00:56:52.746080 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.27s 2026-03-19 00:56:52.746087 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.66s 2026-03-19 00:56:52.746095 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.60s 2026-03-19 00:56:52.746102 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.13s 2026-03-19 00:56:52.746107 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.53s 2026-03-19 00:56:52.746111 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 3.49s 2026-03-19 00:56:52.746115 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.42s 2026-03-19 00:56:52.746119 | orchestrator | mariadb : 
Ensuring config directories exist ----------------------------- 3.20s 2026-03-19 00:56:52.746122 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.17s 2026-03-19 00:56:52.746126 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.12s 2026-03-19 00:56:52.746131 | orchestrator | Check MariaDB service --------------------------------------------------- 2.92s 2026-03-19 00:56:52.746134 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.71s 2026-03-19 00:56:52.746138 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.47s 2026-03-19 00:56:52.746142 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.36s 2026-03-19 00:56:52.746146 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.31s 2026-03-19 00:56:52.746149 | orchestrator | 2026-03-19 00:56:52 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:56:52.746154 | orchestrator | 2026-03-19 00:56:52 | INFO  | Task 27eecb6e-a7a6-4a0e-bbd6-8cd9e7635605 is in state STARTED 2026-03-19 00:56:52.746158 | orchestrator | 2026-03-19 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:55.775241 | orchestrator | 2026-03-19 00:56:55 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:56:55.775562 | orchestrator | 2026-03-19 00:56:55 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:56:55.776514 | orchestrator | 2026-03-19 00:56:55 | INFO  | Task 27eecb6e-a7a6-4a0e-bbd6-8cd9e7635605 is in state STARTED 2026-03-19 00:56:55.776562 | orchestrator | 2026-03-19 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:56:58.803200 | orchestrator | 2026-03-19 00:56:58 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 
00:56:58.803312 | orchestrator | 2026-03-19 00:56:58 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:56:58.804224 | orchestrator | 2026-03-19 00:56:58 | INFO  | Task 27eecb6e-a7a6-4a0e-bbd6-8cd9e7635605 is in state STARTED 2026-03-19 00:56:58.804248 | orchestrator | 2026-03-19 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:27.073137 | orchestrator | 2026-03-19 00:58:27 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:27.075966 | orchestrator | 2026-03-19 00:58:27 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:27.079725 | orchestrator | 2026-03-19 00:58:27 | INFO  | Task 27eecb6e-a7a6-4a0e-bbd6-8cd9e7635605 is in state SUCCESS 2026-03-19 00:58:27.080512 | orchestrator | 2026-03-19 00:58:27.080554 | orchestrator | 2026-03-19 00:58:27.080561 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 00:58:27.080566 | orchestrator | 2026-03-19 00:58:27.080570 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 00:58:27.080575 | orchestrator | Thursday 19 March 2026 00:56:53 +0000 (0:00:00.227) 0:00:00.227 ******** 2026-03-19 00:58:27.080579 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.080584 | orchestrator | ok: [testbed-node-1] 
2026-03-19 00:58:27.080590 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.080596 | orchestrator | 2026-03-19 00:58:27.080630 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 00:58:27.080635 | orchestrator | Thursday 19 March 2026 00:56:53 +0000 (0:00:00.236) 0:00:00.463 ******** 2026-03-19 00:58:27.080638 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-19 00:58:27.080643 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-19 00:58:27.080647 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-19 00:58:27.080650 | orchestrator | 2026-03-19 00:58:27.080654 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-19 00:58:27.080658 | orchestrator | 2026-03-19 00:58:27.080662 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 00:58:27.080666 | orchestrator | Thursday 19 March 2026 00:56:54 +0000 (0:00:00.294) 0:00:00.758 ******** 2026-03-19 00:58:27.080670 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:58:27.080675 | orchestrator | 2026-03-19 00:58:27.080727 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-19 00:58:27.080732 | orchestrator | Thursday 19 March 2026 00:56:54 +0000 (0:00:00.449) 0:00:01.207 ******** 2026-03-19 00:58:27.080740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.080779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.080801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.080808 | orchestrator | 2026-03-19 00:58:27.080814 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-19 00:58:27.080825 | orchestrator | Thursday 19 March 2026 00:56:56 +0000 (0:00:01.485) 0:00:02.693 ******** 2026-03-19 00:58:27.080893 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.080902 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.080915 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.080920 | orchestrator | 2026-03-19 00:58:27.080926 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 00:58:27.080932 | orchestrator | Thursday 19 March 2026 00:56:56 +0000 (0:00:00.257) 0:00:02.950 ******** 2026-03-19 00:58:27.080938 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-19 00:58:27.081069 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-19 00:58:27.081081 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-19 00:58:27.081088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-19 00:58:27.081094 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-19 00:58:27.081100 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-19 00:58:27.081106 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-19 00:58:27.081112 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-19 00:58:27.081118 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-19 00:58:27.081125 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-19 00:58:27.081130 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-19 00:58:27.081136 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-19 00:58:27.081142 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-19 00:58:27.081148 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-19 00:58:27.081154 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-19 00:58:27.081160 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-19 00:58:27.081166 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-19 00:58:27.081172 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-19 00:58:27.081178 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-19 00:58:27.081185 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-19 00:58:27.081191 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-19 00:58:27.081198 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-19 00:58:27.081204 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-19 00:58:27.081210 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-19 00:58:27.081217 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-19 00:58:27.081225 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-19 00:58:27.081231 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-19 00:58:27.081238 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-19 00:58:27.081245 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-19 00:58:27.081252 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-19 00:58:27.081264 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-19 00:58:27.081268 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-19 00:58:27.081272 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-19 00:58:27.081277 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-19 00:58:27.081281 | orchestrator | 2026-03-19 00:58:27.081291 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081295 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.631) 0:00:03.582 
******** 2026-03-19 00:58:27.081299 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081303 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081307 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081310 | orchestrator | 2026-03-19 00:58:27.081314 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081318 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.345) 0:00:03.927 ******** 2026-03-19 00:58:27.081322 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081326 | orchestrator | 2026-03-19 00:58:27.081336 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081340 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.107) 0:00:04.035 ******** 2026-03-19 00:58:27.081344 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081348 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081351 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081355 | orchestrator | 2026-03-19 00:58:27.081359 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081363 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.230) 0:00:04.265 ******** 2026-03-19 00:58:27.081366 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081370 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081374 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081378 | orchestrator | 2026-03-19 00:58:27.081381 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081385 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.249) 0:00:04.515 ******** 2026-03-19 00:58:27.081389 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081393 | orchestrator | 2026-03-19 00:58:27.081396 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-03-19 00:58:27.081400 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.117) 0:00:04.633 ******** 2026-03-19 00:58:27.081428 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081433 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081437 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081441 | orchestrator | 2026-03-19 00:58:27.081444 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081448 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.342) 0:00:04.975 ******** 2026-03-19 00:58:27.081452 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081455 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081459 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081463 | orchestrator | 2026-03-19 00:58:27.081467 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081470 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.255) 0:00:05.231 ******** 2026-03-19 00:58:27.081474 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081482 | orchestrator | 2026-03-19 00:58:27.081486 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081490 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.094) 0:00:05.325 ******** 2026-03-19 00:58:27.081493 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081497 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081501 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081505 | orchestrator | 2026-03-19 00:58:27.081508 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081512 | orchestrator | Thursday 19 March 2026 00:56:59 +0000 (0:00:00.245) 
0:00:05.571 ******** 2026-03-19 00:58:27.081516 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081520 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081523 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081527 | orchestrator | 2026-03-19 00:58:27.081531 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081534 | orchestrator | Thursday 19 March 2026 00:56:59 +0000 (0:00:00.241) 0:00:05.813 ******** 2026-03-19 00:58:27.081538 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081542 | orchestrator | 2026-03-19 00:58:27.081546 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081549 | orchestrator | Thursday 19 March 2026 00:56:59 +0000 (0:00:00.094) 0:00:05.908 ******** 2026-03-19 00:58:27.081553 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081557 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081561 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081564 | orchestrator | 2026-03-19 00:58:27.081568 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081572 | orchestrator | Thursday 19 March 2026 00:56:59 +0000 (0:00:00.338) 0:00:06.247 ******** 2026-03-19 00:58:27.081576 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081579 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081583 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081587 | orchestrator | 2026-03-19 00:58:27.081590 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081594 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:00.277) 0:00:06.524 ******** 2026-03-19 00:58:27.081598 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081601 | orchestrator | 2026-03-19 00:58:27.081605 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081609 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:00.113) 0:00:06.637 ******** 2026-03-19 00:58:27.081613 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081616 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081620 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081624 | orchestrator | 2026-03-19 00:58:27.081628 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081632 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:00.271) 0:00:06.909 ******** 2026-03-19 00:58:27.081635 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081639 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081643 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081647 | orchestrator | 2026-03-19 00:58:27.081650 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081654 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:00.280) 0:00:07.189 ******** 2026-03-19 00:58:27.081658 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081662 | orchestrator | 2026-03-19 00:58:27.081669 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081673 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:00.291) 0:00:07.481 ******** 2026-03-19 00:58:27.081676 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081680 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081684 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081688 | orchestrator | 2026-03-19 00:58:27.081693 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081705 | orchestrator | Thursday 19 March 2026 00:57:01 +0000 
(0:00:00.333) 0:00:07.814 ******** 2026-03-19 00:58:27.081710 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081714 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081718 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081723 | orchestrator | 2026-03-19 00:58:27.081727 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081732 | orchestrator | Thursday 19 March 2026 00:57:01 +0000 (0:00:00.316) 0:00:08.131 ******** 2026-03-19 00:58:27.081736 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081740 | orchestrator | 2026-03-19 00:58:27.081744 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081749 | orchestrator | Thursday 19 March 2026 00:57:01 +0000 (0:00:00.151) 0:00:08.283 ******** 2026-03-19 00:58:27.081753 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081757 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081762 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081766 | orchestrator | 2026-03-19 00:58:27.081770 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081774 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.272) 0:00:08.555 ******** 2026-03-19 00:58:27.081778 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081783 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081787 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081791 | orchestrator | 2026-03-19 00:58:27.081796 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081800 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.536) 0:00:09.091 ******** 2026-03-19 00:58:27.081804 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081808 | orchestrator | 2026-03-19 00:58:27.081813 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081817 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.113) 0:00:09.205 ******** 2026-03-19 00:58:27.081821 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081826 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081830 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081835 | orchestrator | 2026-03-19 00:58:27.081839 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081843 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:00.395) 0:00:09.601 ******** 2026-03-19 00:58:27.081847 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081852 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081856 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081860 | orchestrator | 2026-03-19 00:58:27.081865 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081869 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:00.302) 0:00:09.903 ******** 2026-03-19 00:58:27.081873 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081878 | orchestrator | 2026-03-19 00:58:27.081882 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081886 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:00.104) 0:00:10.008 ******** 2026-03-19 00:58:27.081891 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081896 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081900 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081904 | orchestrator | 2026-03-19 00:58:27.081909 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-19 00:58:27.081913 | orchestrator | Thursday 19 March 2026 
00:57:03 +0000 (0:00:00.318) 0:00:10.326 ******** 2026-03-19 00:58:27.081918 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:58:27.081922 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:58:27.081927 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:58:27.081931 | orchestrator | 2026-03-19 00:58:27.081936 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-19 00:58:27.081948 | orchestrator | Thursday 19 March 2026 00:57:04 +0000 (0:00:00.632) 0:00:10.959 ******** 2026-03-19 00:58:27.081952 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081957 | orchestrator | 2026-03-19 00:58:27.081962 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-19 00:58:27.081965 | orchestrator | Thursday 19 March 2026 00:57:04 +0000 (0:00:00.150) 0:00:11.110 ******** 2026-03-19 00:58:27.081969 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.081973 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.081977 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.081980 | orchestrator | 2026-03-19 00:58:27.081984 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-19 00:58:27.081988 | orchestrator | Thursday 19 March 2026 00:57:04 +0000 (0:00:00.300) 0:00:11.410 ******** 2026-03-19 00:58:27.081992 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:58:27.081995 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:58:27.081999 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:58:27.082003 | orchestrator | 2026-03-19 00:58:27.082007 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-19 00:58:27.082010 | orchestrator | Thursday 19 March 2026 00:57:06 +0000 (0:00:01.809) 0:00:13.220 ******** 2026-03-19 00:58:27.082051 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-19 00:58:27.082058 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-19 00:58:27.082064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-19 00:58:27.082069 | orchestrator | 2026-03-19 00:58:27.082073 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-19 00:58:27.082077 | orchestrator | Thursday 19 March 2026 00:57:09 +0000 (0:00:02.552) 0:00:15.773 ******** 2026-03-19 00:58:27.082084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-19 00:58:27.082089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-19 00:58:27.082093 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-19 00:58:27.082096 | orchestrator | 2026-03-19 00:58:27.082100 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-19 00:58:27.082107 | orchestrator | Thursday 19 March 2026 00:57:12 +0000 (0:00:03.234) 0:00:19.007 ******** 2026-03-19 00:58:27.082111 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-19 00:58:27.082115 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-19 00:58:27.082119 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-19 00:58:27.082123 | orchestrator | 2026-03-19 00:58:27.082126 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-19 00:58:27.082130 | orchestrator | Thursday 19 March 2026 00:57:13 +0000 (0:00:01.351) 
0:00:20.359 ******** 2026-03-19 00:58:27.082134 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.082138 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.082142 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.082145 | orchestrator | 2026-03-19 00:58:27.082149 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-19 00:58:27.082153 | orchestrator | Thursday 19 March 2026 00:57:14 +0000 (0:00:00.270) 0:00:20.630 ******** 2026-03-19 00:58:27.082157 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.082174 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.082178 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.082181 | orchestrator | 2026-03-19 00:58:27.082185 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 00:58:27.082193 | orchestrator | Thursday 19 March 2026 00:57:14 +0000 (0:00:00.239) 0:00:20.870 ******** 2026-03-19 00:58:27.082197 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:58:27.082201 | orchestrator | 2026-03-19 00:58:27.082204 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-19 00:58:27.082210 | orchestrator | Thursday 19 March 2026 00:57:15 +0000 (0:00:00.737) 0:00:21.607 ******** 2026-03-19 00:58:27.082219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.082237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.082258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.082265 | orchestrator | 2026-03-19 00:58:27.082271 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-19 00:58:27.082277 | orchestrator | Thursday 19 March 2026 00:57:16 +0000 (0:00:01.260) 0:00:22.868 ******** 2026-03-19 00:58:27.082290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 00:58:27.082311 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.082325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 00:58:27.082331 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.082338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 00:58:27.082348 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.082354 | orchestrator | 2026-03-19 00:58:27.082359 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-19 00:58:27.082365 | orchestrator | Thursday 19 March 2026 00:57:17 +0000 (0:00:00.902) 0:00:23.771 ******** 2026-03-19 00:58:27.082378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 00:58:27.082389 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.082395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 00:58:27.082401 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.082435 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 00:58:27.082447 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.082452 | orchestrator | 2026-03-19 00:58:27.082458 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-19 00:58:27.082463 | orchestrator | Thursday 19 March 2026 00:57:18 +0000 (0:00:01.030) 0:00:24.801 ******** 2026-03-19 00:58:27.082469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.082484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.082495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 00:58:27.082501 | orchestrator | 2026-03-19 00:58:27.082507 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 00:58:27.082513 | orchestrator | Thursday 19 March 2026 00:57:19 +0000 (0:00:01.216) 0:00:26.017 ******** 2026-03-19 00:58:27.082519 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:58:27.082525 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:58:27.082531 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:58:27.082540 | orchestrator | 2026-03-19 00:58:27.082545 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 00:58:27.082551 | orchestrator | Thursday 19 March 2026 00:57:19 +0000 (0:00:00.310) 0:00:26.328 ******** 2026-03-19 00:58:27.082557 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:58:27.082569 | orchestrator | 2026-03-19 00:58:27.082575 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-19 00:58:27.082584 | orchestrator | Thursday 19 March 2026 00:57:20 +0000 (0:00:01.028) 0:00:27.356 ******** 2026-03-19 00:58:27.082590 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:58:27.082596 | orchestrator | 2026-03-19 00:58:27.082602 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-19 00:58:27.082608 | orchestrator | Thursday 19 March 2026 00:57:22 +0000 (0:00:01.851) 0:00:29.207 ******** 2026-03-19 00:58:27.082612 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:58:27.082616 | orchestrator | 2026-03-19 00:58:27.082619 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-19 00:58:27.082623 | orchestrator | Thursday 19 March 2026 00:57:24 +0000 (0:00:01.904) 0:00:31.112 ******** 2026-03-19 00:58:27.082627 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:58:27.082631 | orchestrator | 2026-03-19 00:58:27.082635 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-19 00:58:27.082639 | orchestrator | Thursday 19 March 2026 00:57:39 +0000 (0:00:14.819) 0:00:45.931 ******** 2026-03-19 00:58:27.082642 | orchestrator | 2026-03-19 00:58:27.082646 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-19 00:58:27.082650 | orchestrator | Thursday 19 March 2026 00:57:39 +0000 (0:00:00.063) 0:00:45.995 ******** 2026-03-19 00:58:27.082654 | orchestrator | 2026-03-19 00:58:27.082657 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-19 00:58:27.082661 | orchestrator | Thursday 19 March 2026 00:57:39 +0000 (0:00:00.062) 0:00:46.058 ******** 2026-03-19 00:58:27.082665 
| orchestrator | 2026-03-19 00:58:27.082669 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-19 00:58:27.082673 | orchestrator | Thursday 19 March 2026 00:57:39 +0000 (0:00:00.067) 0:00:46.125 ******** 2026-03-19 00:58:27.082676 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:58:27.082680 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:58:27.082684 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:58:27.082688 | orchestrator | 2026-03-19 00:58:27.082692 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:58:27.082695 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-19 00:58:27.082700 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-19 00:58:27.082704 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-19 00:58:27.082708 | orchestrator | 2026-03-19 00:58:27.082712 | orchestrator | 2026-03-19 00:58:27.082716 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:58:27.082720 | orchestrator | Thursday 19 March 2026 00:58:26 +0000 (0:00:46.415) 0:01:32.541 ******** 2026-03-19 00:58:27.082723 | orchestrator | =============================================================================== 2026-03-19 00:58:27.082727 | orchestrator | horizon : Restart horizon container ------------------------------------ 46.42s 2026-03-19 00:58:27.082731 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.82s 2026-03-19 00:58:27.082735 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.23s 2026-03-19 00:58:27.082738 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.55s 
2026-03-19 00:58:27.082742 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 1.90s 2026-03-19 00:58:27.082746 | orchestrator | horizon : Creating Horizon database ------------------------------------- 1.85s 2026-03-19 00:58:27.082750 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.81s 2026-03-19 00:58:27.082753 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.49s 2026-03-19 00:58:27.082762 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.35s 2026-03-19 00:58:27.082765 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.26s 2026-03-19 00:58:27.082769 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.22s 2026-03-19 00:58:27.082773 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.03s 2026-03-19 00:58:27.082777 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.03s 2026-03-19 00:58:27.082781 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.90s 2026-03-19 00:58:27.082785 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2026-03-19 00:58:27.082788 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2026-03-19 00:58:27.082792 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-03-19 00:58:27.082796 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-03-19 00:58:27.082800 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.45s 2026-03-19 00:58:27.082803 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.40s 
2026-03-19 00:58:27.082810 | orchestrator | 2026-03-19 00:58:27 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:30.125895 | orchestrator | 2026-03-19 00:58:30 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:30.128187 | orchestrator | 2026-03-19 00:58:30 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:30.128367 | orchestrator | 2026-03-19 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:33.182785 | orchestrator | 2026-03-19 00:58:33 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:33.184894 | orchestrator | 2026-03-19 00:58:33 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:33.184930 | orchestrator | 2026-03-19 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:36.237334 | orchestrator | 2026-03-19 00:58:36 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:36.239605 | orchestrator | 2026-03-19 00:58:36 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:36.239640 | orchestrator | 2026-03-19 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:39.284832 | orchestrator | 2026-03-19 00:58:39 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:39.286667 | orchestrator | 2026-03-19 00:58:39 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:39.286713 | orchestrator | 2026-03-19 00:58:39 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:42.332050 | orchestrator | 2026-03-19 00:58:42 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:42.334060 | orchestrator | 2026-03-19 00:58:42 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:42.334109 | orchestrator | 2026-03-19 00:58:42 | INFO  | Wait 
1 second(s) until the next check 2026-03-19 00:58:45.378511 | orchestrator | 2026-03-19 00:58:45 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:45.379659 | orchestrator | 2026-03-19 00:58:45 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:45.379706 | orchestrator | 2026-03-19 00:58:45 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:48.425758 | orchestrator | 2026-03-19 00:58:48 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state STARTED 2026-03-19 00:58:48.427674 | orchestrator | 2026-03-19 00:58:48 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED 2026-03-19 00:58:48.427721 | orchestrator | 2026-03-19 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:58:51.477437 | orchestrator | 2026-03-19 00:58:51.477599 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-19 00:58:51.477616 | orchestrator | 2.16.14 2026-03-19 00:58:51.477625 | orchestrator | 2026-03-19 00:58:51.477631 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-19 00:58:51.477697 | orchestrator | 2026-03-19 00:58:51.477704 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 00:58:51.477710 | orchestrator | Thursday 19 March 2026 00:56:49 +0000 (0:00:00.496) 0:00:00.496 ******** 2026-03-19 00:58:51.477716 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5, testbed-node-3 2026-03-19 00:58:51.477724 | orchestrator | 2026-03-19 00:58:51.477729 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 00:58:51.477735 | orchestrator | Thursday 19 March 2026 00:56:50 +0000 (0:00:00.788) 0:00:01.285 ******** 2026-03-19 00:58:51.477741 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.477748 | 
orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.477754 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.477760 | orchestrator | 2026-03-19 00:58:51.477766 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 00:58:51.477934 | orchestrator | Thursday 19 March 2026 00:56:50 +0000 (0:00:00.929) 0:00:02.214 ******** 2026-03-19 00:58:51.477941 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.477946 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.477952 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.477957 | orchestrator | 2026-03-19 00:58:51.477964 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 00:58:51.477970 | orchestrator | Thursday 19 March 2026 00:56:51 +0000 (0:00:00.236) 0:00:02.450 ******** 2026-03-19 00:58:51.477976 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.477982 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.477989 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.477993 | orchestrator | 2026-03-19 00:58:51.477998 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 00:58:51.478002 | orchestrator | Thursday 19 March 2026 00:56:51 +0000 (0:00:00.660) 0:00:03.110 ******** 2026-03-19 00:58:51.478006 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.478010 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.478060 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.478068 | orchestrator | 2026-03-19 00:58:51.478074 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 00:58:51.478081 | orchestrator | Thursday 19 March 2026 00:56:52 +0000 (0:00:00.258) 0:00:03.369 ******** 2026-03-19 00:58:51.478167 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.478177 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.478183 | 
orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.478189 | orchestrator | 2026-03-19 00:58:51.478196 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 00:58:51.478202 | orchestrator | Thursday 19 March 2026 00:56:52 +0000 (0:00:00.309) 0:00:03.679 ******** 2026-03-19 00:58:51.478208 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.478214 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.478220 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.478225 | orchestrator | 2026-03-19 00:58:51.478231 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 00:58:51.478236 | orchestrator | Thursday 19 March 2026 00:56:52 +0000 (0:00:00.262) 0:00:03.941 ******** 2026-03-19 00:58:51.478469 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.478489 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.478551 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.478558 | orchestrator | 2026-03-19 00:58:51.478565 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 00:58:51.478575 | orchestrator | Thursday 19 March 2026 00:56:53 +0000 (0:00:00.390) 0:00:04.332 ******** 2026-03-19 00:58:51.478585 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.478591 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.478597 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.478603 | orchestrator | 2026-03-19 00:58:51.478610 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 00:58:51.478616 | orchestrator | Thursday 19 March 2026 00:56:53 +0000 (0:00:00.249) 0:00:04.581 ******** 2026-03-19 00:58:51.478623 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:58:51.478630 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:58:51.478637 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:58:51.478643 | orchestrator | 2026-03-19 00:58:51.478649 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 00:58:51.478656 | orchestrator | Thursday 19 March 2026 00:56:53 +0000 (0:00:00.577) 0:00:05.158 ******** 2026-03-19 00:58:51.478663 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.478669 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.478675 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.478680 | orchestrator | 2026-03-19 00:58:51.478686 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 00:58:51.478693 | orchestrator | Thursday 19 March 2026 00:56:54 +0000 (0:00:00.383) 0:00:05.542 ******** 2026-03-19 00:58:51.478700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:58:51.478706 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:58:51.478712 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:58:51.478719 | orchestrator | 2026-03-19 00:58:51.478725 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 00:58:51.478732 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:02.891) 0:00:08.433 ******** 2026-03-19 00:58:51.478738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 00:58:51.478745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 00:58:51.478752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 00:58:51.478759 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.478764 | 
orchestrator | 2026-03-19 00:58:51.478812 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 00:58:51.478817 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.378) 0:00:08.812 ******** 2026-03-19 00:58:51.478822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.478830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.478836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.478842 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.478851 | orchestrator | 2026-03-19 00:58:51.478858 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 00:58:51.478865 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.652) 0:00:09.465 ******** 2026-03-19 00:58:51.478884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.478901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.478908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.478914 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.478920 | orchestrator | 2026-03-19 00:58:51.478927 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 00:58:51.478932 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.126) 0:00:09.591 ******** 2026-03-19 00:58:51.478938 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f8c0d3d33245', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 00:56:55.186914', 'end': '2026-03-19 00:56:55.210997', 'delta': '0:00:00.024083', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f8c0d3d33245'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 00:58:51.478945 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f3060da46e34', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 00:56:56.118752', 'end': '2026-03-19 00:56:56.159164', 'delta': '0:00:00.040412', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f3060da46e34'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 00:58:51.478974 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0dffc399b485', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 00:56:57.039460', 'end': '2026-03-19 00:56:57.081829', 'delta': '0:00:00.042369', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0dffc399b485'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 00:58:51.478983 | orchestrator | 2026-03-19 00:58:51.478996 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 00:58:51.479001 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.268) 0:00:09.860 ******** 2026-03-19 00:58:51.479008 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.479014 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.479021 | 
orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.479026 | orchestrator | 2026-03-19 00:58:51.479032 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 00:58:51.479038 | orchestrator | Thursday 19 March 2026 00:56:59 +0000 (0:00:00.399) 0:00:10.259 ******** 2026-03-19 00:58:51.479044 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-19 00:58:51.479050 | orchestrator | 2026-03-19 00:58:51.479057 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 00:58:51.479063 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:01.628) 0:00:11.887 ******** 2026-03-19 00:58:51.479069 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479072 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479076 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479080 | orchestrator | 2026-03-19 00:58:51.479084 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 00:58:51.479087 | orchestrator | Thursday 19 March 2026 00:57:00 +0000 (0:00:00.313) 0:00:12.201 ******** 2026-03-19 00:58:51.479095 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479099 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479103 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479107 | orchestrator | 2026-03-19 00:58:51.479111 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 00:58:51.479116 | orchestrator | Thursday 19 March 2026 00:57:01 +0000 (0:00:00.438) 0:00:12.639 ******** 2026-03-19 00:58:51.479120 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479124 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479128 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479133 | orchestrator | 2026-03-19 00:58:51.479137 | 
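The fact-setting steps logged above (running_mon detection via `docker ps -q --filter name=ceph-mon-<host>`, and reusing the fsid of an already-running cluster) can be approximated in plain Python. This is only an illustrative sketch of the selection logic, not ceph-ansible's actual implementation; the function names `pick_running_mon` and `pick_fsid` are hypothetical.

```python
# Hedged sketch of the "Set_fact running_mon - container" and
# "Set_fact fsid from current_fsid" logic visible in the log above.

def pick_running_mon(probe_results):
    """Return the first mon host whose `docker ps -q` probe found a container."""
    for result in probe_results:
        if result["rc"] == 0 and result["stdout"]:
            return result["item"]  # e.g. 'testbed-node-0'
    return None

def pick_fsid(current_fsid_probe, generated_fsid):
    """Prefer the fsid reported by a running cluster over a freshly generated one."""
    if current_fsid_probe["rc"] == 0 and current_fsid_probe["stdout"].strip():
        return current_fsid_probe["stdout"].strip()
    return generated_fsid

# Sample data shaped like the registered results in the log
probes = [
    {"rc": 0, "stdout": "f8c0d3d33245", "item": "testbed-node-0"},
    {"rc": 0, "stdout": "f3060da46e34", "item": "testbed-node-1"},
]
print(pick_running_mon(probes))                                  # testbed-node-0
print(pick_fsid({"rc": 0, "stdout": "abc-123\n"}, "new-uuid"))   # abc-123
```

This mirrors why the "Set_fact current_fsid rc 1" and "Generate cluster fsid" tasks are skipped in the log: the fsid probe on testbed-node-2 succeeded, so the existing fsid is kept.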
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 00:58:51.479142 | orchestrator | Thursday 19 March 2026 00:57:01 +0000 (0:00:00.510) 0:00:13.150 ******** 2026-03-19 00:58:51.479146 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.479150 | orchestrator | 2026-03-19 00:58:51.479155 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 00:58:51.479159 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.116) 0:00:13.267 ******** 2026-03-19 00:58:51.479163 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479167 | orchestrator | 2026-03-19 00:58:51.479172 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 00:58:51.479176 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.204) 0:00:13.471 ******** 2026-03-19 00:58:51.479180 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479184 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479188 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479193 | orchestrator | 2026-03-19 00:58:51.479198 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 00:58:51.479205 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.271) 0:00:13.743 ******** 2026-03-19 00:58:51.479214 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479220 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479226 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479232 | orchestrator | 2026-03-19 00:58:51.479238 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 00:58:51.479244 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.303) 0:00:14.046 ******** 2026-03-19 00:58:51.479250 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 00:58:51.479256 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479262 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479269 | orchestrator | 2026-03-19 00:58:51.479281 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 00:58:51.479288 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:00.516) 0:00:14.562 ******** 2026-03-19 00:58:51.479293 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479299 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479304 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479308 | orchestrator | 2026-03-19 00:58:51.479312 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 00:58:51.479317 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:00.310) 0:00:14.873 ******** 2026-03-19 00:58:51.479321 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479325 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479329 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479333 | orchestrator | 2026-03-19 00:58:51.479338 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 00:58:51.479342 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:00.303) 0:00:15.176 ******** 2026-03-19 00:58:51.479347 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479351 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479379 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479416 | orchestrator | 2026-03-19 00:58:51.479422 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 00:58:51.479428 | orchestrator | Thursday 19 March 2026 00:57:04 +0000 (0:00:00.354) 0:00:15.530 ******** 2026-03-19 00:58:51.479435 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 00:58:51.479441 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479447 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479453 | orchestrator | 2026-03-19 00:58:51.479457 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 00:58:51.479461 | orchestrator | Thursday 19 March 2026 00:57:04 +0000 (0:00:00.511) 0:00:16.042 ******** 2026-03-19 00:58:51.479467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0', 'dm-uuid-LVM-wGHRZdQDjg7vWurNEdhtc2UbI834lJn3dmVIrhekVpy3FO1O1xKqGaZmVIfMMr3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c', 'dm-uuid-LVM-YJ5R6ssJBZnSwomj4KA118jQLucuu9g7fKyyCMhU750XfMum9yqZRg037CQJJiqS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-19 00:58:51.479544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479577 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2f61p2-jEyl-RpgU-sj5H-HS7W-v4rc-bkHIfD', 'scsi-0QEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1', 'scsi-SQEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i3Cm88-eUVQ-T5g2-dPBI-tgHR-J0r6-11VZ1M', 'scsi-0QEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600', 'scsi-SQEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3', 'scsi-SQEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c', 'dm-uuid-LVM-7N41ZUFIMAXsQSUepdaXTlYgVduEAh0mYbywt0PbMF6rvfnHGFOKoqx1SYb7yfJz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1', 
'dm-uuid-LVM-dCofXil7JsY0aXuuqmsFXceNZQjGuIC9lL6jKguWcjVZBueY2muhAfprIfKqF9se'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479638 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb', 'dm-uuid-LVM-kSU7NOpZdrx1DM0VxQW2rlgZLxiojUbqfvtvBF0d8sWGc9vxnyKtJ8R9Cw6mmxfP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3', 'dm-uuid-LVM-XJS6QCxb3Z3bSJ0LVsY39xUM9q1hATkVetyxEGek35uW73tkjXLoTbJvAfnRxCRU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479715 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvF6bc-JV0t-rctN-oq69-66zh-uec0-1pLf1I', 'scsi-0QEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f', 'scsi-SQEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UyvJR0-eWJa-VJQz-wPxK-2odC-cvUy-VOQer1', 'scsi-0QEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361', 'scsi-SQEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51 | INFO  | Task e78e0135-6006-4675-8051-2057d25b01f8 is in state SUCCESS 2026-03-19 00:58:51.479747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d', 'scsi-SQEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-19 00:58:51.479772 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.479776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 00:58:51.479794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479803 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b82v9G-2Ska-2RTK-iDfN-Mq85-FRiq-DBlpZs', 'scsi-0QEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5', 'scsi-SQEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52XQY0-ueRj-IBB7-FKHA-4Vnm-xluU-ldZA0L', 'scsi-0QEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400', 'scsi-SQEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85', 'scsi-SQEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 00:58:51.479823 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.479827 | orchestrator | 2026-03-19 00:58:51.479831 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 00:58:51.479835 | orchestrator | Thursday 19 March 2026 00:57:05 +0000 (0:00:00.664) 0:00:16.706 ******** 2026-03-19 00:58:51.479839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0', 'dm-uuid-LVM-wGHRZdQDjg7vWurNEdhtc2UbI834lJn3dmVIrhekVpy3FO1O1xKqGaZmVIfMMr3f'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c', 'dm-uuid-LVM-YJ5R6ssJBZnSwomj4KA118jQLucuu9g7fKyyCMhU750XfMum9yqZRg037CQJJiqS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479889 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1685bc7d-e4e0-4b87-bbb4-7dc843c2418d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c', 'dm-uuid-LVM-7N41ZUFIMAXsQSUepdaXTlYgVduEAh0mYbywt0PbMF6rvfnHGFOKoqx1SYb7yfJz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0-osd--block--24d614e2--ec6e--5ed2--9057--307e4a3cb0c0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2f61p2-jEyl-RpgU-sj5H-HS7W-v4rc-bkHIfD', 'scsi-0QEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1', 'scsi-SQEMU_QEMU_HARDDISK_cc7e233c-8cac-4df4-a011-c93cbddae1f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d672a78a--4132--5655--a0fe--bae0f8eb714c-osd--block--d672a78a--4132--5655--a0fe--bae0f8eb714c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i3Cm88-eUVQ-T5g2-dPBI-tgHR-J0r6-11VZ1M', 'scsi-0QEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600', 'scsi-SQEMU_QEMU_HARDDISK_184b1ce1-ca15-4595-9678-ac68bfb03600'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1', 
'dm-uuid-LVM-dCofXil7JsY0aXuuqmsFXceNZQjGuIC9lL6jKguWcjVZBueY2muhAfprIfKqF9se'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3', 'scsi-SQEMU_QEMU_HARDDISK_d5727df1-f3c7-4916-bc14-eaaddd40c7b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479983 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.479987 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.479995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480003 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480019 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb', 'dm-uuid-LVM-kSU7NOpZdrx1DM0VxQW2rlgZLxiojUbqfvtvBF0d8sWGc9vxnyKtJ8R9Cw6mmxfP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480090 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3', 'dm-uuid-LVM-XJS6QCxb3Z3bSJ0LVsY39xUM9q1hATkVetyxEGek35uW73tkjXLoTbJvAfnRxCRU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480118 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480130 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae8b1f95-6d42-4cca-804f-b3321e20a38b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 00:58:51.480150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c9339aa0--dcb3--5462--b16c--1d446efe678c-osd--block--c9339aa0--dcb3--5462--b16c--1d446efe678c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvF6bc-JV0t-rctN-oq69-66zh-uec0-1pLf1I', 'scsi-0QEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f', 'scsi-SQEMU_QEMU_HARDDISK_4a32226a-f2aa-4275-90ab-4bb1f7d2ca8f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0813f2fe--0b5e--5f32--866c--c0f68041cbc1-osd--block--0813f2fe--0b5e--5f32--866c--c0f68041cbc1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UyvJR0-eWJa-VJQz-wPxK-2odC-cvUy-VOQer1', 'scsi-0QEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361', 'scsi-SQEMU_QEMU_HARDDISK_29b2bcaa-94cd-4bc7-8ad5-d8dbf0337361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d', 'scsi-SQEMU_QEMU_HARDDISK_c44a5407-fe6a-4dde-8ed4-2e9072e3ed0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480192 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480202 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480212 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480235 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480251 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16', 'scsi-SQEMU_QEMU_HARDDISK_8048828a-f8fb-40bf-8a3c-f28dd7047b99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 00:58:51.480267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f7952abd--f19d--5f54--b846--7c46d615b8fb-osd--block--f7952abd--f19d--5f54--b846--7c46d615b8fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b82v9G-2Ska-2RTK-iDfN-Mq85-FRiq-DBlpZs', 'scsi-0QEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5', 'scsi-SQEMU_QEMU_HARDDISK_2e1c1462-5959-44f4-a623-e25e33d313c5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--056512d9--3a02--5302--afc2--fa0158449af3-osd--block--056512d9--3a02--5302--afc2--fa0158449af3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52XQY0-ueRj-IBB7-FKHA-4Vnm-xluU-ldZA0L', 'scsi-0QEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400', 'scsi-SQEMU_QEMU_HARDDISK_28009f26-7505-45c2-833e-d396e3f8b400'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85', 'scsi-SQEMU_QEMU_HARDDISK_e4aaa0c2-0099-489f-8e98-802ea2f51c85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480294 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 00:58:51.480301 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480307 | orchestrator | 2026-03-19 00:58:51.480313 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 00:58:51.480319 | orchestrator | Thursday 19 March 2026 00:57:06 +0000 (0:00:00.614) 0:00:17.321 ******** 2026-03-19 00:58:51.480325 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.480332 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.480337 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.480344 | orchestrator | 2026-03-19 00:58:51.480350 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 00:58:51.480488 | orchestrator | Thursday 19 March 2026 00:57:06 +0000 (0:00:00.677) 0:00:17.999 ******** 2026-03-19 00:58:51.480495 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.480501 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.480507 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.480513 | orchestrator | 2026-03-19 00:58:51.480519 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 00:58:51.480524 | orchestrator | Thursday 19 March 2026 00:57:07 +0000 (0:00:00.466) 0:00:18.466 ******** 2026-03-19 00:58:51.480529 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.480535 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.480540 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.480546 | orchestrator | 2026-03-19 00:58:51.480552 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 00:58:51.480558 | orchestrator | Thursday 19 March 2026 00:57:07 +0000 (0:00:00.623) 0:00:19.089 
******** 2026-03-19 00:58:51.480563 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480569 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480574 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480580 | orchestrator | 2026-03-19 00:58:51.480591 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 00:58:51.480597 | orchestrator | Thursday 19 March 2026 00:57:08 +0000 (0:00:00.314) 0:00:19.404 ******** 2026-03-19 00:58:51.480604 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480609 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480615 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480621 | orchestrator | 2026-03-19 00:58:51.480627 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 00:58:51.480633 | orchestrator | Thursday 19 March 2026 00:57:08 +0000 (0:00:00.480) 0:00:19.884 ******** 2026-03-19 00:58:51.480639 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480645 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480650 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480656 | orchestrator | 2026-03-19 00:58:51.480662 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 00:58:51.480667 | orchestrator | Thursday 19 March 2026 00:57:09 +0000 (0:00:00.542) 0:00:20.426 ******** 2026-03-19 00:58:51.480673 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 00:58:51.480679 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 00:58:51.480685 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 00:58:51.480697 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 00:58:51.480703 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 00:58:51.480708 | orchestrator | 
ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 00:58:51.480715 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 00:58:51.480721 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 00:58:51.480726 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 00:58:51.480733 | orchestrator | 2026-03-19 00:58:51.480738 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 00:58:51.480744 | orchestrator | Thursday 19 March 2026 00:57:10 +0000 (0:00:00.969) 0:00:21.396 ******** 2026-03-19 00:58:51.480750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 00:58:51.480756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 00:58:51.480761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 00:58:51.480767 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480772 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 00:58:51.480778 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 00:58:51.480783 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 00:58:51.480789 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480794 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 00:58:51.480800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 00:58:51.480806 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 00:58:51.480811 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480816 | orchestrator | 2026-03-19 00:58:51.480822 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 00:58:51.480827 | orchestrator | Thursday 19 March 2026 00:57:10 +0000 (0:00:00.449) 0:00:21.846 ******** 2026-03-19 
00:58:51.480834 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 00:58:51.480841 | orchestrator | 2026-03-19 00:58:51.480855 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 00:58:51.480863 | orchestrator | Thursday 19 March 2026 00:57:11 +0000 (0:00:00.742) 0:00:22.589 ******** 2026-03-19 00:58:51.480868 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480874 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480880 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480885 | orchestrator | 2026-03-19 00:58:51.480890 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 00:58:51.480896 | orchestrator | Thursday 19 March 2026 00:57:11 +0000 (0:00:00.377) 0:00:22.966 ******** 2026-03-19 00:58:51.480901 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480907 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480913 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480919 | orchestrator | 2026-03-19 00:58:51.480925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 00:58:51.480931 | orchestrator | Thursday 19 March 2026 00:57:12 +0000 (0:00:00.269) 0:00:23.236 ******** 2026-03-19 00:58:51.480937 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.480943 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.480948 | orchestrator | skipping: [testbed-node-5] 2026-03-19 00:58:51.480954 | orchestrator | 2026-03-19 00:58:51.480959 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 00:58:51.480965 | orchestrator | Thursday 19 March 2026 00:57:12 +0000 (0:00:00.262) 0:00:23.499 ******** 2026-03-19 
00:58:51.480970 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.480976 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.480981 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.480987 | orchestrator | 2026-03-19 00:58:51.480999 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 00:58:51.481005 | orchestrator | Thursday 19 March 2026 00:57:12 +0000 (0:00:00.483) 0:00:23.982 ******** 2026-03-19 00:58:51.481010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:58:51.481015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:58:51.481021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:58:51.481026 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.481032 | orchestrator | 2026-03-19 00:58:51.481037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 00:58:51.481042 | orchestrator | Thursday 19 March 2026 00:57:13 +0000 (0:00:00.344) 0:00:24.326 ******** 2026-03-19 00:58:51.481048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:58:51.481057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:58:51.481062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:58:51.481068 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.481073 | orchestrator | 2026-03-19 00:58:51.481079 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 00:58:51.481084 | orchestrator | Thursday 19 March 2026 00:57:13 +0000 (0:00:00.334) 0:00:24.661 ******** 2026-03-19 00:58:51.481090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 00:58:51.481096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 00:58:51.481101 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 00:58:51.481107 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.481113 | orchestrator | 2026-03-19 00:58:51.481118 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 00:58:51.481123 | orchestrator | Thursday 19 March 2026 00:57:13 +0000 (0:00:00.304) 0:00:24.965 ******** 2026-03-19 00:58:51.481129 | orchestrator | ok: [testbed-node-3] 2026-03-19 00:58:51.481135 | orchestrator | ok: [testbed-node-4] 2026-03-19 00:58:51.481140 | orchestrator | ok: [testbed-node-5] 2026-03-19 00:58:51.481146 | orchestrator | 2026-03-19 00:58:51.481151 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 00:58:51.481157 | orchestrator | Thursday 19 March 2026 00:57:14 +0000 (0:00:00.302) 0:00:25.268 ******** 2026-03-19 00:58:51.481163 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 00:58:51.481169 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 00:58:51.481175 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 00:58:51.481180 | orchestrator | 2026-03-19 00:58:51.481186 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 00:58:51.481191 | orchestrator | Thursday 19 March 2026 00:57:14 +0000 (0:00:00.454) 0:00:25.723 ******** 2026-03-19 00:58:51.481196 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:58:51.481202 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:58:51.481207 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:58:51.481213 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 00:58:51.481218 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-19 00:58:51.481224 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 00:58:51.481230 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 00:58:51.481236 | orchestrator | 2026-03-19 00:58:51.481241 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 00:58:51.481247 | orchestrator | Thursday 19 March 2026 00:57:15 +0000 (0:00:00.798) 0:00:26.522 ******** 2026-03-19 00:58:51.481252 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 00:58:51.481258 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 00:58:51.481300 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 00:58:51.481307 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 00:58:51.481320 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 00:58:51.481327 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 00:58:51.481333 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 00:58:51.481339 | orchestrator | 2026-03-19 00:58:51.481346 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-19 00:58:51.481386 | orchestrator | Thursday 19 March 2026 00:57:16 +0000 (0:00:01.592) 0:00:28.114 ******** 2026-03-19 00:58:51.481391 | orchestrator | skipping: [testbed-node-3] 2026-03-19 00:58:51.481395 | orchestrator | skipping: [testbed-node-4] 2026-03-19 00:58:51.481399 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-19 00:58:51.481403 | orchestrator | 2026-03-19 00:58:51.481407 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-19 00:58:51.481411 | orchestrator | Thursday 19 March 2026 00:57:17 +0000 (0:00:00.326) 0:00:28.441 ******** 2026-03-19 00:58:51.481417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 00:58:51.481423 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 00:58:51.481427 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 00:58:51.481435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 00:58:51.481439 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 00:58:51.481443 | orchestrator | 2026-03-19 00:58:51.481447 | orchestrator | TASK [generate keys] 
***********************************************************
2026-03-19 00:58:51.481450 | orchestrator | Thursday 19 March 2026 00:58:00 +0000 (0:00:43.626) 0:01:12.068 ********
2026-03-19 00:58:51.481454 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481458 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481469 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481477 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-19 00:58:51.481480 | orchestrator |
2026-03-19 00:58:51.481489 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-19 00:58:51.481493 | orchestrator | Thursday 19 March 2026 00:58:23 +0000 (0:00:22.855) 0:01:34.923 ********
2026-03-19 00:58:51.481496 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481504 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481508 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481515 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481519 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-19 00:58:51.481523 | orchestrator |
2026-03-19 00:58:51.481526 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-19 00:58:51.481530 | orchestrator | Thursday 19 March 2026 00:58:35 +0000 (0:00:11.362) 0:01:46.285 ********
2026-03-19 00:58:51.481534 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481538 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:58:51.481541 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:58:51.481549 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481553 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:58:51.481557 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:58:51.481561 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481565 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:58:51.481568 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:58:51.481572 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481577 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:58:51.481583 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:58:51.481589 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481596 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:58:51.481606 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:58:51.481612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 00:58:51.481617 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 00:58:51.481624 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 00:58:51.481629 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-19 00:58:51.481635 | orchestrator |
2026-03-19 00:58:51.481640 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 00:58:51.481646 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-19 00:58:51.481655 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-19 00:58:51.481667 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-19 00:58:51.481673 | orchestrator |
2026-03-19 00:58:51.481684 | orchestrator |
2026-03-19 00:58:51.481689 | orchestrator |
2026-03-19 00:58:51.481695 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 00:58:51.481701 | orchestrator | Thursday 19 March 2026 00:58:50 +0000 (0:00:15.673) 0:02:01.959 ********
2026-03-19 00:58:51.481707 | orchestrator | ===============================================================================
2026-03-19 00:58:51.481712 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.63s
2026-03-19 00:58:51.481718 | orchestrator | generate keys ---------------------------------------------------------- 22.86s
2026-03-19 00:58:51.481723 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 15.67s
2026-03-19 00:58:51.481729 | orchestrator | get keys from monitors ------------------------------------------------- 11.36s
2026-03-19 00:58:51.481735 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.89s
2026-03-19 00:58:51.481740 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.63s
2026-03-19 00:58:51.481746 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.59s
2026-03-19 00:58:51.481751 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.97s
2026-03-19 00:58:51.481757 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.93s
2026-03-19 00:58:51.481763 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.80s
2026-03-19 00:58:51.481767 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.79s
2026-03-19 00:58:51.481771 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s
2026-03-19 00:58:51.481775 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s
2026-03-19 00:58:51.481778 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.66s
2026-03-19 00:58:51.481784 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.66s
2026-03-19 00:58:51.481789 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.65s
2026-03-19 00:58:51.481795 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.62s
2026-03-19 00:58:51.481800 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s
2026-03-19 00:58:51.481809 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.58s
2026-03-19 00:58:51.481817 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.54s
2026-03-19 00:58:51.481822 | orchestrator | 2026-03-19 00:58:51 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:58:51.481830 | orchestrator | 2026-03-19 00:58:51 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:58:54.535497 | orchestrator | 2026-03-19 00:58:54 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:58:54.537194 | orchestrator | 2026-03-19 00:58:54 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:58:54.537684 | orchestrator | 2026-03-19 00:58:54 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:58:57.588092 | orchestrator | 2026-03-19 00:58:57 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:58:57.590080 | orchestrator | 2026-03-19 00:58:57 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:58:57.590127 | orchestrator | 2026-03-19 00:58:57 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:00.634463 | orchestrator | 2026-03-19 00:59:00 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:00.636935 | orchestrator | 2026-03-19 00:59:00 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:00.636998 | orchestrator | 2026-03-19 00:59:00 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:03.679247 | orchestrator | 2026-03-19 00:59:03 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:03.680638 | orchestrator | 2026-03-19 00:59:03 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:03.680692 | orchestrator | 2026-03-19 00:59:03 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:06.730653 | orchestrator | 2026-03-19 00:59:06 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:06.730882 | orchestrator | 2026-03-19 00:59:06 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:06.731122 | orchestrator | 2026-03-19 00:59:06 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:09.784247 | orchestrator | 2026-03-19 00:59:09 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:09.785972 | orchestrator | 2026-03-19 00:59:09 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:09.786167 | orchestrator | 2026-03-19 00:59:09 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:12.833492 | orchestrator | 2026-03-19 00:59:12 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:12.834727 | orchestrator | 2026-03-19 00:59:12 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:12.834997 | orchestrator | 2026-03-19 00:59:12 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:15.880063 | orchestrator | 2026-03-19 00:59:15 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:15.880872 | orchestrator | 2026-03-19 00:59:15 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:15.880960 | orchestrator | 2026-03-19 00:59:15 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:18.923107 | orchestrator | 2026-03-19 00:59:18 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:18.925704 | orchestrator | 2026-03-19 00:59:18 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:18.925782 | orchestrator | 2026-03-19 00:59:18 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:21.968411 | orchestrator | 2026-03-19 00:59:21 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:21.968588 | orchestrator | 2026-03-19 00:59:21 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state STARTED
2026-03-19 00:59:21.968601 | orchestrator | 2026-03-19 00:59:21 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:25.049418 | orchestrator | 2026-03-19 00:59:25 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED
2026-03-19 00:59:25.049513 | orchestrator | 2026-03-19 00:59:25 | INFO  | Task 838f5be4-2226-45e9-9a14-8c6fe724c4c3 is in state SUCCESS
2026-03-19 00:59:25.050154 | orchestrator |
2026-03-19 00:59:25.050190 | orchestrator |
2026-03-19 00:59:25.050202 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 00:59:25.050210 | orchestrator |
2026-03-19 00:59:25.050217 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 00:59:25.050225 | orchestrator | Thursday 19 March 2026 00:56:54 +0000 (0:00:00.302) 0:00:00.302 ********
2026-03-19 00:59:25.050232 | orchestrator | ok: [testbed-node-0]
2026-03-19 00:59:25.050240 | orchestrator | ok: [testbed-node-1]
2026-03-19 00:59:25.050247 | orchestrator | ok: [testbed-node-2]
2026-03-19 00:59:25.050253 | orchestrator |
2026-03-19 00:59:25.050260 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 00:59:25.050267 | orchestrator | Thursday 19 March 2026 00:56:54 +0000 (0:00:00.236) 0:00:00.539 ********
2026-03-19 00:59:25.050425 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-19 00:59:25.050444 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-19 00:59:25.050457 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-19 00:59:25.050466 | orchestrator |
2026-03-19 00:59:25.050473 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-19 00:59:25.050509 | orchestrator |
2026-03-19 00:59:25.050563 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-19 00:59:25.050572 | orchestrator | Thursday 19 March 2026 00:56:54 +0000 (0:00:00.462) 0:00:00.796 ********
2026-03-19 00:59:25.050579 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:59:25.050587 | orchestrator |
2026-03-19 00:59:25.050594 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-19 00:59:25.050600 | orchestrator | Thursday 19 March 2026 00:56:55 +0000 (0:00:00.462) 0:00:01.258 ********
2026-03-19 00:59:25.050613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.050853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.050904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.050926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.050934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.050940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.050952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.050957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.050961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.050965 | orchestrator |
2026-03-19 00:59:25.050969 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-19 00:59:25.050980 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:02.256) 0:00:03.514 ********
2026-03-19 00:59:25.050984 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:59:25.050989 | orchestrator |
2026-03-19 00:59:25.050996 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-19 00:59:25.051000 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.107) 0:00:03.622 ********
2026-03-19 00:59:25.051004 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:59:25.051008 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:59:25.051012 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:59:25.051015 | orchestrator |
2026-03-19 00:59:25.051019 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-19 00:59:25.051023 | orchestrator | Thursday 19 March 2026 00:56:57 +0000 (0:00:00.226) 0:00:03.849 ********
2026-03-19 00:59:25.051026 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 00:59:25.051030 | orchestrator |
2026-03-19 00:59:25.051034 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-19 00:59:25.051038 | orchestrator | Thursday 19 March 2026 00:56:58 +0000 (0:00:00.796) 0:00:04.645 ********
2026-03-19 00:59:25.051041 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 00:59:25.051045 | orchestrator |
2026-03-19 00:59:25.051049 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-19 00:59:25.051053 | orchestrator | Thursday 19 March 2026 00:56:59 +0000 (0:00:00.594) 0:00:05.239 ********
2026-03-19 00:59:25.051058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051115 | orchestrator |
2026-03-19 00:59:25.051119 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-03-19 00:59:25.051123 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:02.904) 0:00:08.144 ********
2026-03-19 00:59:25.051131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051158 | orchestrator | skipping: [testbed-node-2]
2026-03-19 00:59:25.051163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051166 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:59:25.051174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051186 | orchestrator | skipping: [testbed-node-1]
2026-03-19 00:59:25.051190 | orchestrator |
2026-03-19 00:59:25.051194 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-19 00:59:25.051200 | orchestrator | Thursday 19 March 2026 00:57:02 +0000 (0:00:00.641) 0:00:08.785 ********
2026-03-19 00:59:25.051204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 00:59:25.051212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 00:59:25.051218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 00:59:25.051223 | orchestrator | skipping: [testbed-node-0]
2026-03-19 00:59:25.051227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port':
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 00:59:25.051231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 00:59:25.051249 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.051253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 00:59:25.051261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 00:59:25.051269 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.051273 | orchestrator | 2026-03-19 00:59:25.051277 | orchestrator | TASK [keystone : Copying over config.json files for services] 
****************** 2026-03-19 00:59:25.051281 | orchestrator | Thursday 19 March 2026 00:57:03 +0000 (0:00:01.026) 0:00:09.811 ******** 2026-03-19 00:59:25.051351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.051366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.051374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.051378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051383 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051409 | orchestrator | 2026-03-19 00:59:25.051413 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-19 00:59:25.051417 | orchestrator | Thursday 19 March 2026 00:57:06 +0000 (0:00:03.264) 0:00:13.076 ******** 2026-03-19 00:59:25.051425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.051430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.051444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.051455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.051479 | orchestrator | 2026-03-19 00:59:25.051483 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-19 00:59:25.051488 | orchestrator | Thursday 19 March 2026 00:57:13 +0000 (0:00:06.146) 0:00:19.222 ******** 2026-03-19 00:59:25.051492 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.051496 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:59:25.051500 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:59:25.051505 | orchestrator | 2026-03-19 00:59:25.051509 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-19 00:59:25.051513 | orchestrator | Thursday 19 March 2026 00:57:14 +0000 (0:00:01.265) 0:00:20.488 ******** 2026-03-19 00:59:25.051517 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.051522 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.051526 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.051530 | orchestrator | 2026-03-19 00:59:25.051535 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-19 00:59:25.051539 | orchestrator | Thursday 19 March 2026 00:57:15 +0000 (0:00:00.718) 0:00:21.206 ******** 2026-03-19 00:59:25.051543 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.051547 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.051552 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.051556 | 
orchestrator | 2026-03-19 00:59:25.051561 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-19 00:59:25.051565 | orchestrator | Thursday 19 March 2026 00:57:15 +0000 (0:00:00.305) 0:00:21.512 ******** 2026-03-19 00:59:25.051569 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.051574 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.051578 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.051582 | orchestrator | 2026-03-19 00:59:25.051587 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-19 00:59:25.051591 | orchestrator | Thursday 19 March 2026 00:57:15 +0000 (0:00:00.269) 0:00:21.781 ******** 2026-03-19 00:59:25.051600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 00:59:25.051608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 00:59:25.051627 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.051641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 00:59:25.051650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 00:59:25.051668 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.051675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 00:59:25.051688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 00:59:25.051696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 00:59:25.051707 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 00:59:25.051715 | orchestrator | 2026-03-19 00:59:25.051721 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-19 00:59:25.051727 | orchestrator | Thursday 19 March 2026 00:57:16 +0000 (0:00:00.504) 0:00:22.286 ******** 2026-03-19 00:59:25.051734 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.051740 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.051746 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.051752 | orchestrator | 2026-03-19 00:59:25.051759 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-19 00:59:25.051765 | orchestrator | Thursday 19 March 2026 00:57:16 +0000 (0:00:00.417) 0:00:22.704 ******** 2026-03-19 00:59:25.051771 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-19 00:59:25.051778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-19 00:59:25.051784 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-19 00:59:25.051790 | orchestrator | 2026-03-19 00:59:25.051797 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-19 00:59:25.051803 | orchestrator | Thursday 19 March 2026 00:57:18 +0000 (0:00:01.476) 0:00:24.181 ******** 2026-03-19 00:59:25.051809 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 00:59:25.051815 | orchestrator | 2026-03-19 00:59:25.051820 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-19 00:59:25.051826 | orchestrator | Thursday 19 March 2026 00:57:19 +0000 (0:00:00.969) 0:00:25.150 ******** 2026-03-19 00:59:25.051832 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.051838 | orchestrator | skipping: 
[testbed-node-1] 2026-03-19 00:59:25.051844 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.051849 | orchestrator | 2026-03-19 00:59:25.051855 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-19 00:59:25.051861 | orchestrator | Thursday 19 March 2026 00:57:19 +0000 (0:00:00.509) 0:00:25.659 ******** 2026-03-19 00:59:25.051875 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 00:59:25.051881 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 00:59:25.051887 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 00:59:25.051893 | orchestrator | 2026-03-19 00:59:25.051899 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-19 00:59:25.051909 | orchestrator | Thursday 19 March 2026 00:57:20 +0000 (0:00:01.387) 0:00:27.047 ******** 2026-03-19 00:59:25.051916 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:59:25.051922 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:59:25.051928 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:59:25.051935 | orchestrator | 2026-03-19 00:59:25.051941 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-19 00:59:25.051947 | orchestrator | Thursday 19 March 2026 00:57:21 +0000 (0:00:00.432) 0:00:27.480 ******** 2026-03-19 00:59:25.051953 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-19 00:59:25.051959 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-19 00:59:25.051967 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-19 00:59:25.051973 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-19 00:59:25.051979 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 
'dest': 'fernet-rotate.sh'}) 2026-03-19 00:59:25.051984 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-19 00:59:25.051990 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-19 00:59:25.051996 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-19 00:59:25.052002 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-19 00:59:25.052008 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-19 00:59:25.052014 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-19 00:59:25.052021 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-19 00:59:25.052027 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-19 00:59:25.052034 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-19 00:59:25.052041 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-19 00:59:25.052048 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-19 00:59:25.052054 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-19 00:59:25.052060 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-19 00:59:25.052067 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-19 00:59:25.052073 | orchestrator | changed: [testbed-node-2] 
=> (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-19 00:59:25.052080 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-19 00:59:25.052087 | orchestrator | 2026-03-19 00:59:25.052095 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-19 00:59:25.052099 | orchestrator | Thursday 19 March 2026 00:57:28 +0000 (0:00:07.565) 0:00:35.045 ******** 2026-03-19 00:59:25.052102 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-19 00:59:25.052106 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-19 00:59:25.052117 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-19 00:59:25.052121 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-19 00:59:25.052125 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-19 00:59:25.052128 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-19 00:59:25.052132 | orchestrator | 2026-03-19 00:59:25.052136 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-19 00:59:25.052140 | orchestrator | Thursday 19 March 2026 00:57:31 +0000 (0:00:02.323) 0:00:37.369 ******** 2026-03-19 00:59:25.052148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.052153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.052157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 00:59:25.052165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 00:59:25.052173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 00:59:25.052177 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 00:59:25.052184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.052188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.052192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 00:59:25.052196 | orchestrator | 2026-03-19 00:59:25.052200 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-19 00:59:25.052204 | orchestrator | Thursday 19 March 2026 00:57:33 +0000 (0:00:02.066) 0:00:39.435 ******** 2026-03-19 00:59:25.052208 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052211 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.052215 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.052222 | orchestrator | 2026-03-19 00:59:25.052226 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-19 00:59:25.052230 | orchestrator | Thursday 19 March 2026 00:57:33 +0000 (0:00:00.459) 0:00:39.894 ******** 2026-03-19 00:59:25.052233 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052237 | orchestrator | 2026-03-19 00:59:25.052241 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-19 00:59:25.052247 | orchestrator | Thursday 19 March 2026 00:57:35 +0000 (0:00:02.147) 0:00:42.041 ******** 2026-03-19 00:59:25.052251 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052255 | orchestrator | 2026-03-19 00:59:25.052259 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-19 00:59:25.052262 | orchestrator | Thursday 19 March 2026 00:57:38 +0000 
(0:00:02.658) 0:00:44.700 ******** 2026-03-19 00:59:25.052266 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:59:25.052270 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:59:25.052274 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:59:25.052277 | orchestrator | 2026-03-19 00:59:25.052299 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-19 00:59:25.052307 | orchestrator | Thursday 19 March 2026 00:57:39 +0000 (0:00:00.897) 0:00:45.598 ******** 2026-03-19 00:59:25.052314 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:59:25.052318 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:59:25.052322 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:59:25.052326 | orchestrator | 2026-03-19 00:59:25.052329 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-19 00:59:25.052333 | orchestrator | Thursday 19 March 2026 00:57:39 +0000 (0:00:00.293) 0:00:45.891 ******** 2026-03-19 00:59:25.052337 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052341 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.052345 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.052348 | orchestrator | 2026-03-19 00:59:25.052352 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-19 00:59:25.052356 | orchestrator | Thursday 19 March 2026 00:57:40 +0000 (0:00:00.417) 0:00:46.309 ******** 2026-03-19 00:59:25.052360 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052363 | orchestrator | 2026-03-19 00:59:25.052367 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-19 00:59:25.052371 | orchestrator | Thursday 19 March 2026 00:57:56 +0000 (0:00:15.918) 0:01:02.228 ******** 2026-03-19 00:59:25.052374 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052378 | orchestrator | 2026-03-19 00:59:25.052382 | 
orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-19 00:59:25.052386 | orchestrator | Thursday 19 March 2026 00:58:06 +0000 (0:00:10.880) 0:01:13.109 ******** 2026-03-19 00:59:25.052389 | orchestrator | 2026-03-19 00:59:25.052393 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-19 00:59:25.052397 | orchestrator | Thursday 19 March 2026 00:58:07 +0000 (0:00:00.075) 0:01:13.184 ******** 2026-03-19 00:59:25.052401 | orchestrator | 2026-03-19 00:59:25.052404 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-19 00:59:25.052412 | orchestrator | Thursday 19 March 2026 00:58:07 +0000 (0:00:00.075) 0:01:13.260 ******** 2026-03-19 00:59:25.052415 | orchestrator | 2026-03-19 00:59:25.052419 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-19 00:59:25.052423 | orchestrator | Thursday 19 March 2026 00:58:07 +0000 (0:00:00.076) 0:01:13.336 ******** 2026-03-19 00:59:25.052427 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052431 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:59:25.052434 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:59:25.052438 | orchestrator | 2026-03-19 00:59:25.052442 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-19 00:59:25.052446 | orchestrator | Thursday 19 March 2026 00:58:16 +0000 (0:00:09.237) 0:01:22.573 ******** 2026-03-19 00:59:25.052449 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052458 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:59:25.052461 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:59:25.052465 | orchestrator | 2026-03-19 00:59:25.052469 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-19 00:59:25.052473 | orchestrator | Thursday 
19 March 2026 00:58:25 +0000 (0:00:09.450) 0:01:32.024 ******** 2026-03-19 00:59:25.052476 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052481 | orchestrator | changed: [testbed-node-1] 2026-03-19 00:59:25.052487 | orchestrator | changed: [testbed-node-2] 2026-03-19 00:59:25.052493 | orchestrator | 2026-03-19 00:59:25.052502 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-19 00:59:25.052511 | orchestrator | Thursday 19 March 2026 00:58:31 +0000 (0:00:05.653) 0:01:37.678 ******** 2026-03-19 00:59:25.052517 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 00:59:25.052522 | orchestrator | 2026-03-19 00:59:25.052528 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-19 00:59:25.052533 | orchestrator | Thursday 19 March 2026 00:58:32 +0000 (0:00:00.697) 0:01:38.375 ******** 2026-03-19 00:59:25.052539 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:59:25.052545 | orchestrator | ok: [testbed-node-1] 2026-03-19 00:59:25.052550 | orchestrator | ok: [testbed-node-2] 2026-03-19 00:59:25.052556 | orchestrator | 2026-03-19 00:59:25.052561 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-19 00:59:25.052566 | orchestrator | Thursday 19 March 2026 00:58:32 +0000 (0:00:00.671) 0:01:39.047 ******** 2026-03-19 00:59:25.052571 | orchestrator | changed: [testbed-node-0] 2026-03-19 00:59:25.052577 | orchestrator | 2026-03-19 00:59:25.052582 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-19 00:59:25.052588 | orchestrator | Thursday 19 March 2026 00:58:34 +0000 (0:00:01.532) 0:01:40.580 ******** 2026-03-19 00:59:25.052594 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-19 00:59:25.052599 | orchestrator | 2026-03-19 
00:59:25.052605 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-19 00:59:25.052612 | orchestrator | Thursday 19 March 2026 00:58:45 +0000 (0:00:10.549) 0:01:51.129 ******** 2026-03-19 00:59:25.052618 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-19 00:59:25.052624 | orchestrator | 2026-03-19 00:59:25.052630 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-19 00:59:25.052636 | orchestrator | Thursday 19 March 2026 00:59:12 +0000 (0:00:27.486) 0:02:18.615 ******** 2026-03-19 00:59:25.052642 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-19 00:59:25.052652 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-19 00:59:25.052656 | orchestrator | 2026-03-19 00:59:25.052660 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-19 00:59:25.052664 | orchestrator | Thursday 19 March 2026 00:59:19 +0000 (0:00:06.554) 0:02:25.170 ******** 2026-03-19 00:59:25.052667 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052671 | orchestrator | 2026-03-19 00:59:25.052675 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-19 00:59:25.052678 | orchestrator | Thursday 19 March 2026 00:59:19 +0000 (0:00:00.137) 0:02:25.308 ******** 2026-03-19 00:59:25.052682 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052686 | orchestrator | 2026-03-19 00:59:25.052690 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-19 00:59:25.052693 | orchestrator | Thursday 19 March 2026 00:59:19 +0000 (0:00:00.122) 0:02:25.430 ******** 2026-03-19 00:59:25.052697 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052701 | orchestrator 
| 2026-03-19 00:59:25.052705 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-19 00:59:25.052708 | orchestrator | Thursday 19 March 2026 00:59:19 +0000 (0:00:00.134) 0:02:25.565 ******** 2026-03-19 00:59:25.052717 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052721 | orchestrator | 2026-03-19 00:59:25.052724 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-19 00:59:25.052728 | orchestrator | Thursday 19 March 2026 00:59:19 +0000 (0:00:00.301) 0:02:25.866 ******** 2026-03-19 00:59:25.052732 | orchestrator | ok: [testbed-node-0] 2026-03-19 00:59:25.052736 | orchestrator | 2026-03-19 00:59:25.052739 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-19 00:59:25.052743 | orchestrator | Thursday 19 March 2026 00:59:23 +0000 (0:00:03.825) 0:02:29.692 ******** 2026-03-19 00:59:25.052747 | orchestrator | skipping: [testbed-node-0] 2026-03-19 00:59:25.052751 | orchestrator | skipping: [testbed-node-1] 2026-03-19 00:59:25.052755 | orchestrator | skipping: [testbed-node-2] 2026-03-19 00:59:25.052758 | orchestrator | 2026-03-19 00:59:25.052762 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 00:59:25.052767 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 00:59:25.052775 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-19 00:59:25.052780 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-19 00:59:25.052786 | orchestrator | 2026-03-19 00:59:25.052794 | orchestrator | 2026-03-19 00:59:25.052801 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 00:59:25.052807 | orchestrator | 
Thursday 19 March 2026 00:59:24 +0000 (0:00:00.573) 0:02:30.266 ******** 2026-03-19 00:59:25.052813 | orchestrator | =============================================================================== 2026-03-19 00:59:25.052818 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.49s 2026-03-19 00:59:25.052824 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.92s 2026-03-19 00:59:25.052830 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.88s 2026-03-19 00:59:25.052835 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.55s 2026-03-19 00:59:25.052841 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.45s 2026-03-19 00:59:25.052847 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.24s 2026-03-19 00:59:25.052854 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.57s 2026-03-19 00:59:25.052860 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.55s 2026-03-19 00:59:25.052867 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.15s 2026-03-19 00:59:25.052872 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.65s 2026-03-19 00:59:25.052877 | orchestrator | keystone : Creating default user role ----------------------------------- 3.83s 2026-03-19 00:59:25.052882 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.26s 2026-03-19 00:59:25.052888 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.90s 2026-03-19 00:59:25.052893 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.66s 2026-03-19 00:59:25.052899 | orchestrator | keystone : Copying 
files for keystone-ssh ------------------------------- 2.32s 2026-03-19 00:59:25.052904 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.26s 2026-03-19 00:59:25.052911 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.15s 2026-03-19 00:59:25.052916 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.07s 2026-03-19 00:59:25.052922 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.53s 2026-03-19 00:59:25.052928 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.48s 2026-03-19 00:59:25.052939 | orchestrator | 2026-03-19 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:59:28.081773 | orchestrator | 2026-03-19 00:59:28 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 00:59:28.082395 | orchestrator | 2026-03-19 00:59:28 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state STARTED 2026-03-19 00:59:28.084617 | orchestrator | 2026-03-19 00:59:28 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED 2026-03-19 00:59:28.085309 | orchestrator | 2026-03-19 00:59:28 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 00:59:28.086242 | orchestrator | 2026-03-19 00:59:28 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 00:59:28.086273 | orchestrator | 2026-03-19 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:59:31.113513 | orchestrator | 2026-03-19 00:59:31 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 00:59:31.113993 | orchestrator | 2026-03-19 00:59:31 | INFO  | Task e1ccc72c-f365-4f81-9ce5-c3da20115a6e is in state SUCCESS 2026-03-19 00:59:31.115578 | orchestrator | 2026-03-19 00:59:31 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED 
2026-03-19 00:59:31.116286 | orchestrator | 2026-03-19 00:59:31 | INFO  | Task c783f6b6-9535-4068-81d0-ef68ad4f2fba is in state STARTED 2026-03-19 00:59:31.117166 | orchestrator | 2026-03-19 00:59:31 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 00:59:31.118224 | orchestrator | 2026-03-19 00:59:31 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 00:59:31.118260 | orchestrator | 2026-03-19 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:59:34.153013 | orchestrator | 2026-03-19 00:59:34 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 00:59:34.154864 | orchestrator | 2026-03-19 00:59:34 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED 2026-03-19 00:59:34.155219 | orchestrator | 2026-03-19 00:59:34 | INFO  | Task c783f6b6-9535-4068-81d0-ef68ad4f2fba is in state STARTED 2026-03-19 00:59:34.156609 | orchestrator | 2026-03-19 00:59:34 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 00:59:34.156958 | orchestrator | 2026-03-19 00:59:34 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 00:59:34.156991 | orchestrator | 2026-03-19 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-03-19 00:59:37.202821 | orchestrator | 2026-03-19 00:59:37 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 00:59:37.202948 | orchestrator | 2026-03-19 00:59:37 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED 2026-03-19 00:59:37.204308 | orchestrator | 2026-03-19 00:59:37 | INFO  | Task c783f6b6-9535-4068-81d0-ef68ad4f2fba is in state STARTED 2026-03-19 00:59:37.207210 | orchestrator | 2026-03-19 00:59:37 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 00:59:37.209905 | orchestrator | 2026-03-19 00:59:37 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 
2026-03-19 00:59:37.210521 | orchestrator | 2026-03-19 00:59:37 | INFO  | Wait 1 second(s) until the next check
2026-03-19 00:59:40.240493 | orchestrator | 2026-03-19 00:59:40 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED
2026-03-19 00:59:40.241747 | orchestrator | 2026-03-19 00:59:40 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED
2026-03-19 00:59:40.243627 | orchestrator | 2026-03-19 00:59:40 | INFO  | Task c783f6b6-9535-4068-81d0-ef68ad4f2fba is in state STARTED
2026-03-19 00:59:40.244664 | orchestrator | 2026-03-19 00:59:40 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED
2026-03-19 00:59:40.245947 | orchestrator | 2026-03-19 00:59:40 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED
2026-03-19 00:59:40.245989 | orchestrator | 2026-03-19 00:59:40 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 00:59:43 to 01:00:25; all five tasks remained in state STARTED]
2026-03-19 01:00:29.008335 | orchestrator | 2026-03-19 01:00:29 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED
2026-03-19 01:00:29.008438 | orchestrator | 2026-03-19 01:00:29 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED
2026-03-19 01:00:29.009554 | orchestrator | 2026-03-19 01:00:29 | INFO  | Task c783f6b6-9535-4068-81d0-ef68ad4f2fba is in state SUCCESS
2026-03-19 01:00:29.009713 | orchestrator |
2026-03-19 01:00:29.009722 | orchestrator |
2026-03-19 01:00:29.009730 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-19 01:00:29.009738 | orchestrator |
2026-03-19 01:00:29.009746 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-19 01:00:29.009753 | orchestrator | Thursday 19 March 2026 00:58:54 +0000 (0:00:00.233) 0:00:00.233 ********
2026-03-19 01:00:29.009761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-19 01:00:29.009770 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] =>
(item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.009777 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.009797 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 01:00:29.009812 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.009819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-19 01:00:29.009825 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-19 01:00:29.009831 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-19 01:00:29.009837 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-19 01:00:29.009842 | orchestrator |
2026-03-19 01:00:29.009849 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-19 01:00:29.009855 | orchestrator | Thursday 19 March 2026 00:58:58 +0000 (0:00:04.348) 0:00:04.581 ********
2026-03-19 01:00:29.009861 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-19 01:00:29.009867 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.009873 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.009880 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 01:00:29.009920 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.009926 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-19 01:00:29.009931 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-19 01:00:29.009937 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-19 01:00:29.009960 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-19 01:00:29.009966 | orchestrator |
2026-03-19 01:00:29.009970 | orchestrator | TASK [Create share directory] **************************************************
2026-03-19 01:00:29.009974 | orchestrator | Thursday 19 March 2026 00:59:02 +0000 (0:00:03.689) 0:00:08.271 ********
2026-03-19 01:00:29.009978 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-19 01:00:29.009982 | orchestrator |
2026-03-19 01:00:29.009986 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-19 01:00:29.009990 | orchestrator | Thursday 19 March 2026 00:59:03 +0000 (0:00:01.015) 0:00:09.286 ********
2026-03-19 01:00:29.009994 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-19 01:00:29.009998 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.010001 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.010006 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 01:00:29.010010 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.010067 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-19 01:00:29.010076 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-19 01:00:29.010080 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-19 01:00:29.010084 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-19 01:00:29.010088 | orchestrator |
2026-03-19 01:00:29.010092 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-19 01:00:29.010096 | orchestrator | Thursday 19 March 2026 00:59:17 +0000 (0:00:13.787) 0:00:23.074 ********
2026-03-19 01:00:29.010099 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-19 01:00:29.010104 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-19 01:00:29.010108 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-19 01:00:29.010112 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-19 01:00:29.010128 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-19 01:00:29.010132 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-19 01:00:29.010136 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-19 01:00:29.010140 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-19 01:00:29.010143 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-19 01:00:29.010147 | orchestrator |
2026-03-19 01:00:29.010171 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-19 01:00:29.010177 | orchestrator | Thursday 19 March 2026 00:59:21 +0000 (0:00:04.390) 0:00:27.464 ********
2026-03-19 01:00:29.010189 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-19 01:00:29.010193 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.010197 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.010201 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 01:00:29.010205 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-19 01:00:29.010209 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-19 01:00:29.010212 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-19 01:00:29.010216 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-19 01:00:29.010220 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-19 01:00:29.010224 | orchestrator |
2026-03-19 01:00:29.010228 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:00:29.010232 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:00:29.010237 | orchestrator |
2026-03-19 01:00:29.010241 | orchestrator |
2026-03-19 01:00:29.010245 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:00:29.010248 | orchestrator | Thursday 19 March 2026 00:59:28 +0000 (0:00:07.132) 0:00:34.597 ********
2026-03-19 01:00:29.010252 | orchestrator | ===============================================================================
2026-03-19 01:00:29.010256 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.79s 2026-03-19 01:00:29.010260
| orchestrator | Write ceph keys to the configuration directory -------------------------- 7.13s
2026-03-19 01:00:29.010263 | orchestrator | Check if target directories exist --------------------------------------- 4.39s
2026-03-19 01:00:29.010267 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.35s
2026-03-19 01:00:29.010271 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.69s
2026-03-19 01:00:29.010279 | orchestrator | Create share directory -------------------------------------------------- 1.02s
2026-03-19 01:00:29.010284 | orchestrator |
2026-03-19 01:00:29.010288 | orchestrator |
2026-03-19 01:00:29.010293 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-19 01:00:29.010297 | orchestrator |
2026-03-19 01:00:29.010301 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-19 01:00:29.010306 | orchestrator | Thursday 19 March 2026 00:59:31 +0000 (0:00:00.230) 0:00:00.230 ********
2026-03-19 01:00:29.010310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-19 01:00:29.010316 | orchestrator |
2026-03-19 01:00:29.010320 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-19 01:00:29.010325 | orchestrator | Thursday 19 March 2026 00:59:31 +0000 (0:00:00.181) 0:00:00.412 ********
2026-03-19 01:00:29.010329 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-19 01:00:29.010334 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-19 01:00:29.010339 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-19 01:00:29.010344 | orchestrator |
2026-03-19 01:00:29.010348 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-19 01:00:29.010352 | orchestrator | Thursday 19 March 2026 00:59:33 +0000 (0:00:01.385) 0:00:01.797 ********
2026-03-19 01:00:29.010357 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-19 01:00:29.010361 | orchestrator |
2026-03-19 01:00:29.010366 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-19 01:00:29.010370 | orchestrator | Thursday 19 March 2026 00:59:34 +0000 (0:00:01.022) 0:00:02.819 ********
2026-03-19 01:00:29.010379 | orchestrator | changed: [testbed-manager]
2026-03-19 01:00:29.010384 | orchestrator |
2026-03-19 01:00:29.010388 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-19 01:00:29.010393 | orchestrator | Thursday 19 March 2026 00:59:34 +0000 (0:00:00.762) 0:00:03.581 ********
2026-03-19 01:00:29.010397 | orchestrator | changed: [testbed-manager]
2026-03-19 01:00:29.010401 | orchestrator |
2026-03-19 01:00:29.010406 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-19 01:00:29.010410 | orchestrator | Thursday 19 March 2026 00:59:35 +0000 (0:00:00.816) 0:00:04.398 ********
2026-03-19 01:00:29.010415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-19 01:00:29.010419 | orchestrator | ok: [testbed-manager]
2026-03-19 01:00:29.010424 | orchestrator |
2026-03-19 01:00:29.010429 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-19 01:00:29.010436 | orchestrator | Thursday 19 March 2026 01:00:15 +0000 (0:00:39.937) 0:00:44.336 ********
2026-03-19 01:00:29.010441 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-19 01:00:29.010446 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-19 01:00:29.010450 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-19 01:00:29.010455 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-19 01:00:29.010460 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-19 01:00:29.010466 | orchestrator |
2026-03-19 01:00:29.010472 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-19 01:00:29.010477 | orchestrator | Thursday 19 March 2026 01:00:20 +0000 (0:00:04.635) 0:00:48.972 ********
2026-03-19 01:00:29.010485 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-19 01:00:29.010494 | orchestrator |
2026-03-19 01:00:29.010504 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-19 01:00:29.010510 | orchestrator | Thursday 19 March 2026 01:00:21 +0000 (0:00:00.113) 0:00:49.928 ********
2026-03-19 01:00:29.010515 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:00:29.010521 | orchestrator |
2026-03-19 01:00:29.010527 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-19 01:00:29.010533 | orchestrator | Thursday 19 March 2026 01:00:21 +0000 (0:00:00.273) 0:00:50.042 ********
2026-03-19 01:00:29.010539 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:00:29.010545 | orchestrator |
2026-03-19 01:00:29.010551 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-19 01:00:29.010557 | orchestrator | Thursday 19 March 2026 01:00:21 +0000 (0:00:00.273) 0:00:50.315 ********
2026-03-19 01:00:29.010562 | orchestrator | changed: [testbed-manager]
2026-03-19 01:00:29.010569 | orchestrator |
2026-03-19 01:00:29.010575 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-19 01:00:29.010581 | orchestrator | Thursday 19 March 2026 01:00:23 +0000 (0:00:01.442) 0:00:51.757 ********
2026-03-19 01:00:29.010587 | orchestrator | changed: [testbed-manager]
2026-03-19 01:00:29.010593 | orchestrator |
2026-03-19 01:00:29.010600 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-19 01:00:29.010606 | orchestrator | Thursday 19 March 2026 01:00:24 +0000 (0:00:00.956) 0:00:52.714 ********
2026-03-19 01:00:29.010613 | orchestrator | changed: [testbed-manager]
2026-03-19 01:00:29.010619 | orchestrator |
2026-03-19 01:00:29.010626 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-19 01:00:29.010632 | orchestrator | Thursday 19 March 2026 01:00:24 +0000 (0:00:00.662) 0:00:53.376 ********
2026-03-19 01:00:29.010640 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-19 01:00:29.010645 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-19 01:00:29.010650 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-19 01:00:29.010654 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-19 01:00:29.010659 | orchestrator |
2026-03-19 01:00:29.010663 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:00:29.010672 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:00:29.010786 | orchestrator |
2026-03-19 01:00:29.010797 | orchestrator |
2026-03-19 01:00:29.010809 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:00:29.010815 | orchestrator | Thursday 19 March 2026 01:00:26 +0000 (0:00:01.410) 0:00:54.787 ********
2026-03-19 01:00:29.010821 | orchestrator | ===============================================================================
2026-03-19 01:00:29.010828 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.94s
2026-03-19 01:00:29.010832 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.64s
2026-03-19 01:00:29.010836 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.44s
2026-03-19 01:00:29.010839 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.41s
2026-03-19 01:00:29.010843 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.39s
2026-03-19 01:00:29.010847 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.02s
2026-03-19 01:00:29.010850 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.96s
2026-03-19 01:00:29.010854 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.96s
2026-03-19 01:00:29.010858 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s
2026-03-19 01:00:29.010862 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.76s
2026-03-19 01:00:29.010865 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.66s
2026-03-19 01:00:29.010869 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.27s
2026-03-19 01:00:29.010873 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.18s
2026-03-19 01:00:29.010877 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2026-03-19 01:00:29.010884 | orchestrator | 2026-03-19 01:00:29 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED
2026-03-19 01:00:29.011337 | orchestrator | 2026-03-19 01:00:29 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED
2026-03-19 01:00:29.011891 | orchestrator | 2026-03-19 01:00:29 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED
2026-03-19 01:00:29.011983 | orchestrator | 2026-03-19 01:00:29 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:00:32.058853 | orchestrator | 2026-03-19 01:00:32 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED
2026-03-19 01:00:32.059012 | orchestrator | 2026-03-19 01:00:32 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED
2026-03-19 01:00:32.059647 | orchestrator | 2026-03-19 01:00:32 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED
2026-03-19 01:00:32.061080 | orchestrator | 2026-03-19 01:00:32 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED
2026-03-19 01:00:32.061597 | orchestrator | 2026-03-19 01:00:32 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED
2026-03-19 01:00:32.061652 | orchestrator | 2026-03-19 01:00:32 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:00:35.121393 | orchestrator | 2026-03-19 01:00:35 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED
2026-03-19 01:00:35.121457 | orchestrator | 2026-03-19 01:00:35 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED
2026-03-19 01:00:35.121465 | orchestrator | 2026-03-19 01:00:35 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED
2026-03-19 01:00:35.121471 | orchestrator | 2026-03-19 01:00:35 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED
2026-03-19 01:00:35.121494 | orchestrator |
2026-03-19 01:00:35 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED
2026-03-19 01:00:35.121500 | orchestrator | 2026-03-19 01:00:35 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 01:00:38 to 01:01:02; all five tasks remained in state STARTED]
2026-03-19 01:01:05.953955 | orchestrator | 2026-03-19 01:01:05 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED
2026-03-19 01:01:05.954182 | orchestrator | 2026-03-19 01:01:05 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED
2026-03-19 01:01:05.954962 | orchestrator | 2026-03-19 01:01:05 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED
2026-03-19 01:01:05.955541 | orchestrator | 2026-03-19 01:01:05 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED
2026-03-19 01:01:05.956335 | orchestrator | 2026-03-19 01:01:05 | INFO  | Task
4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:05.956382 | orchestrator | 2026-03-19 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:08.987401 | orchestrator | 2026-03-19 01:01:08 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:08.987834 | orchestrator | 2026-03-19 01:01:08 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED 2026-03-19 01:01:08.988591 | orchestrator | 2026-03-19 01:01:08 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:08.989361 | orchestrator | 2026-03-19 01:01:08 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:08.990102 | orchestrator | 2026-03-19 01:01:08 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:08.990239 | orchestrator | 2026-03-19 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:12.035823 | orchestrator | 2026-03-19 01:01:12 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:12.036502 | orchestrator | 2026-03-19 01:01:12 | INFO  | Task da8d9564-e8ea-41e6-bfb9-b735f972772a is in state STARTED 2026-03-19 01:01:12.037281 | orchestrator | 2026-03-19 01:01:12 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:12.037900 | orchestrator | 2026-03-19 01:01:12 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:12.038351 | orchestrator | 2026-03-19 01:01:12 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:12.038378 | orchestrator | 2026-03-19 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:15.065397 | orchestrator | 2026-03-19 01:01:15 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:15.066669 | orchestrator | 2026-03-19 01:01:15 | INFO  | Task 
da8d9564-e8ea-41e6-bfb9-b735f972772a is in state SUCCESS 2026-03-19 01:01:15.067173 | orchestrator | 2026-03-19 01:01:15 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:15.067861 | orchestrator | 2026-03-19 01:01:15 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:15.070206 | orchestrator | 2026-03-19 01:01:15 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:15.070260 | orchestrator | 2026-03-19 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:18.097986 | orchestrator | 2026-03-19 01:01:18 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:18.099114 | orchestrator | 2026-03-19 01:01:18 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:18.100350 | orchestrator | 2026-03-19 01:01:18 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:18.101488 | orchestrator | 2026-03-19 01:01:18 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:18.102697 | orchestrator | 2026-03-19 01:01:18 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:18.102738 | orchestrator | 2026-03-19 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:21.143974 | orchestrator | 2026-03-19 01:01:21 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:21.144618 | orchestrator | 2026-03-19 01:01:21 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:21.145793 | orchestrator | 2026-03-19 01:01:21 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:21.147081 | orchestrator | 2026-03-19 01:01:21 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:21.150547 | orchestrator | 2026-03-19 01:01:21 | INFO  | Task 
4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:21.150595 | orchestrator | 2026-03-19 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:24.172197 | orchestrator | 2026-03-19 01:01:24 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:24.172557 | orchestrator | 2026-03-19 01:01:24 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:24.173076 | orchestrator | 2026-03-19 01:01:24 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:24.175804 | orchestrator | 2026-03-19 01:01:24 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:24.176234 | orchestrator | 2026-03-19 01:01:24 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:24.176288 | orchestrator | 2026-03-19 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:27.198315 | orchestrator | 2026-03-19 01:01:27 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:27.198550 | orchestrator | 2026-03-19 01:01:27 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:27.199295 | orchestrator | 2026-03-19 01:01:27 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:27.200551 | orchestrator | 2026-03-19 01:01:27 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:27.201191 | orchestrator | 2026-03-19 01:01:27 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:27.201221 | orchestrator | 2026-03-19 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:30.229130 | orchestrator | 2026-03-19 01:01:30 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:30.230208 | orchestrator | 2026-03-19 01:01:30 | INFO  | Task 
f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:30.234430 | orchestrator | 2026-03-19 01:01:30 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:30.234572 | orchestrator | 2026-03-19 01:01:30 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:30.237257 | orchestrator | 2026-03-19 01:01:30 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:30.237325 | orchestrator | 2026-03-19 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:33.260824 | orchestrator | 2026-03-19 01:01:33 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:33.261131 | orchestrator | 2026-03-19 01:01:33 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:33.261911 | orchestrator | 2026-03-19 01:01:33 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:33.263807 | orchestrator | 2026-03-19 01:01:33 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state STARTED 2026-03-19 01:01:33.264379 | orchestrator | 2026-03-19 01:01:33 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:33.264427 | orchestrator | 2026-03-19 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:36.288382 | orchestrator | 2026-03-19 01:01:36 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:36.289670 | orchestrator | 2026-03-19 01:01:36 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:36.292183 | orchestrator | 2026-03-19 01:01:36 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:36.292978 | orchestrator | 2026-03-19 01:01:36 | INFO  | Task 7c6d1840-6dcc-4613-829c-f479903e444c is in state SUCCESS 2026-03-19 01:01:36.294165 | orchestrator | 2026-03-19 01:01:36.294196 | orchestrator 
| 2026-03-19 01:01:36.294201 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-19 01:01:36.294205 | orchestrator | 2026-03-19 01:01:36.294208 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-19 01:01:36.294212 | orchestrator | Thursday 19 March 2026 00:59:28 +0000 (0:00:00.111) 0:00:00.111 ******** 2026-03-19 01:01:36.294215 | orchestrator | changed: [localhost] 2026-03-19 01:01:36.294219 | orchestrator | 2026-03-19 01:01:36.294222 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-19 01:01:36.294225 | orchestrator | Thursday 19 March 2026 00:59:29 +0000 (0:00:01.317) 0:00:01.428 ******** 2026-03-19 01:01:36.294228 | orchestrator | changed: [localhost] 2026-03-19 01:01:36.294231 | orchestrator | 2026-03-19 01:01:36.294234 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-19 01:01:36.294237 | orchestrator | Thursday 19 March 2026 01:00:21 +0000 (0:00:52.211) 0:00:53.639 ******** 2026-03-19 01:01:36.294240 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-03-19 01:01:36.294244 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 
2026-03-19 01:01:36.294247 | orchestrator | changed: [localhost] 2026-03-19 01:01:36.294250 | orchestrator | 2026-03-19 01:01:36.294253 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:01:36.294256 | orchestrator | 2026-03-19 01:01:36.294259 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:01:36.294262 | orchestrator | Thursday 19 March 2026 01:01:12 +0000 (0:00:51.029) 0:01:44.668 ******** 2026-03-19 01:01:36.294265 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:01:36.294268 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:01:36.294271 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:01:36.294274 | orchestrator | 2026-03-19 01:01:36.294280 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:01:36.294287 | orchestrator | Thursday 19 March 2026 01:01:13 +0000 (0:00:00.607) 0:01:45.275 ******** 2026-03-19 01:01:36.294294 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-19 01:01:36.294299 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-19 01:01:36.294304 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-19 01:01:36.294310 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-19 01:01:36.294339 | orchestrator | 2026-03-19 01:01:36.294344 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-19 01:01:36.294348 | orchestrator | skipping: no hosts matched 2026-03-19 01:01:36.294354 | orchestrator | 2026-03-19 01:01:36.294359 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:01:36.294365 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:01:36.294407 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:01:36.294414 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:01:36.294419 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:01:36.294424 | orchestrator | 2026-03-19 01:01:36.294464 | orchestrator | 2026-03-19 01:01:36.294468 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:01:36.294471 | orchestrator | Thursday 19 March 2026 01:01:14 +0000 (0:00:00.606) 0:01:45.882 ******** 2026-03-19 01:01:36.294474 | orchestrator | =============================================================================== 2026-03-19 01:01:36.294477 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 52.21s 2026-03-19 01:01:36.294480 | orchestrator | Download ironic-agent kernel ------------------------------------------- 51.03s 2026-03-19 01:01:36.294483 | orchestrator | Ensure the destination directory exists --------------------------------- 1.32s 2026-03-19 01:01:36.294487 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s 2026-03-19 01:01:36.294490 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-03-19 01:01:36.294493 | orchestrator | 2026-03-19 01:01:36.294496 | orchestrator | 2026-03-19 01:01:36.294499 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:01:36.294502 | orchestrator | 2026-03-19 01:01:36.294505 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:01:36.294508 | orchestrator | Thursday 19 March 2026 00:59:28 +0000 (0:00:00.497) 0:00:00.497 ******** 2026-03-19 01:01:36.294511 | orchestrator | ok: [testbed-node-0] 2026-03-19 
01:01:36.294514 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:01:36.294517 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:01:36.294520 | orchestrator | 2026-03-19 01:01:36.294523 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:01:36.294526 | orchestrator | Thursday 19 March 2026 00:59:28 +0000 (0:00:00.470) 0:00:00.968 ******** 2026-03-19 01:01:36.294530 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-19 01:01:36.294540 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-19 01:01:36.294543 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-19 01:01:36.294546 | orchestrator | 2026-03-19 01:01:36.294549 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-19 01:01:36.294552 | orchestrator | 2026-03-19 01:01:36.294555 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-19 01:01:36.294558 | orchestrator | Thursday 19 March 2026 00:59:29 +0000 (0:00:00.384) 0:00:01.353 ******** 2026-03-19 01:01:36.294568 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:01:36.294572 | orchestrator | 2026-03-19 01:01:36.294575 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-19 01:01:36.294578 | orchestrator | Thursday 19 March 2026 00:59:30 +0000 (0:00:00.853) 0:00:02.206 ******** 2026-03-19 01:01:36.294581 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-19 01:01:36.294584 | orchestrator | 2026-03-19 01:01:36.294588 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-19 01:01:36.294591 | orchestrator | Thursday 19 March 2026 00:59:34 +0000 (0:00:04.358) 0:00:06.565 ******** 2026-03-19 01:01:36.294594 | 
orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-19 01:01:36.294598 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-19 01:01:36.294604 | orchestrator | 2026-03-19 01:01:36.294608 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-19 01:01:36.294611 | orchestrator | Thursday 19 March 2026 00:59:42 +0000 (0:00:08.428) 0:00:14.994 ******** 2026-03-19 01:01:36.294614 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 01:01:36.294617 | orchestrator | 2026-03-19 01:01:36.294620 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-19 01:01:36.294623 | orchestrator | Thursday 19 March 2026 00:59:46 +0000 (0:00:03.722) 0:00:18.717 ******** 2026-03-19 01:01:36.294626 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-19 01:01:36.294629 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:01:36.294632 | orchestrator | 2026-03-19 01:01:36.294635 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-19 01:01:36.294639 | orchestrator | Thursday 19 March 2026 00:59:50 +0000 (0:00:03.658) 0:00:22.375 ******** 2026-03-19 01:01:36.294642 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:01:36.294645 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-19 01:01:36.294648 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-19 01:01:36.294651 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-19 01:01:36.294654 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-19 01:01:36.294657 | orchestrator | 2026-03-19 01:01:36.294660 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] 
******************** 2026-03-19 01:01:36.294663 | orchestrator | Thursday 19 March 2026 01:00:07 +0000 (0:00:17.143) 0:00:39.519 ******** 2026-03-19 01:01:36.294667 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-19 01:01:36.294670 | orchestrator | 2026-03-19 01:01:36.294673 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-19 01:01:36.294676 | orchestrator | Thursday 19 March 2026 01:00:11 +0000 (0:00:03.678) 0:00:43.198 ******** 2026-03-19 01:01:36.294681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.294688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.294694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.294701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294728 | orchestrator | 2026-03-19 01:01:36.294732 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-19 01:01:36.294735 | orchestrator | Thursday 19 March 2026 01:00:14 +0000 (0:00:02.988) 0:00:46.186 ******** 2026-03-19 01:01:36.294738 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-19 01:01:36.294741 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-19 01:01:36.294744 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-19 01:01:36.294747 | orchestrator | 2026-03-19 01:01:36.294750 | 
orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-19 01:01:36.294753 | orchestrator | Thursday 19 March 2026 01:00:15 +0000 (0:00:01.828) 0:00:48.014 ******** 2026-03-19 01:01:36.294756 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.294759 | orchestrator | 2026-03-19 01:01:36.294763 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-19 01:01:36.294767 | orchestrator | Thursday 19 March 2026 01:00:16 +0000 (0:00:00.107) 0:00:48.121 ******** 2026-03-19 01:01:36.294772 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.294785 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:01:36.294796 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:01:36.294801 | orchestrator | 2026-03-19 01:01:36.294806 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-19 01:01:36.294811 | orchestrator | Thursday 19 March 2026 01:00:16 +0000 (0:00:00.330) 0:00:48.452 ******** 2026-03-19 01:01:36.294815 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:01:36.294835 | orchestrator | 2026-03-19 01:01:36.294840 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-19 01:01:36.294846 | orchestrator | Thursday 19 March 2026 01:00:17 +0000 (0:00:00.864) 0:00:49.316 ******** 2026-03-19 01:01:36.294853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.294859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.294877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.294882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.294911 | orchestrator | 2026-03-19 01:01:36.294920 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-19 01:01:36.294926 | orchestrator | Thursday 19 March 2026 01:00:22 +0000 (0:00:05.192) 0:00:54.508 ******** 2026-03-19 01:01:36.294930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.294934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.294938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.294942 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:01:36.294946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.294954 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.294960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.294964 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.294968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.294972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.294975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.294984 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:01:36.294988 | orchestrator | 2026-03-19 01:01:36.294991 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-19 01:01:36.294994 | orchestrator | Thursday 19 March 2026 01:00:23 +0000 (0:00:00.873) 0:00:55.382 ******** 2026-03-19 01:01:36.294997 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.295005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295012 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.295015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.295018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295033 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295036 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:01:36.295041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.295192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295202 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:01:36.295206 | orchestrator | 2026-03-19 01:01:36.295209 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-19 01:01:36.295212 | orchestrator | Thursday 19 March 2026 01:00:25 +0000 (0:00:02.090) 0:00:57.473 ******** 2026-03-19 01:01:36.295216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295258 | orchestrator | 2026-03-19 01:01:36.295261 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-19 01:01:36.295264 | orchestrator | Thursday 19 March 2026 01:00:29 +0000 (0:00:04.465) 0:01:01.938 ******** 2026-03-19 01:01:36.295267 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:01:36.295270 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295273 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:01:36.295277 | orchestrator | 2026-03-19 01:01:36.295280 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-19 01:01:36.295284 | orchestrator | Thursday 19 March 2026 01:00:31 +0000 (0:00:01.780) 0:01:03.718 ******** 2026-03-19 01:01:36.295287 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 01:01:36.295290 | orchestrator | 2026-03-19 01:01:36.295293 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-19 01:01:36.295296 | orchestrator | Thursday 19 March 2026 01:00:33 +0000 (0:00:01.729) 0:01:05.448 ******** 2026-03-19 01:01:36.295300 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:01:36.295303 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:01:36.295306 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.295309 | orchestrator | 2026-03-19 01:01:36.295312 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-19 01:01:36.295315 | orchestrator | Thursday 19 March 2026 01:00:34 +0000 (0:00:00.726) 0:01:06.174 ******** 2026-03-19 01:01:36.295318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 
01:01:36.295353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295357 | orchestrator | 2026-03-19 01:01:36.295360 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-19 01:01:36.295363 | orchestrator | Thursday 19 March 2026 01:00:43 +0000 (0:00:09.485) 0:01:15.660 ******** 2026-03-19 01:01:36.295368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.295373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295382 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.295385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.295388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295395 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:01:36.295401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 01:01:36.295404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:01:36.295413 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:01:36.295416 | orchestrator | 
2026-03-19 01:01:36.295419 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-19 01:01:36.295422 | orchestrator | Thursday 19 March 2026 01:00:45 +0000 (0:00:01.643) 0:01:17.304 ******** 2026-03-19 01:01:36.295425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 01:01:36.295490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:01:36.295512 | orchestrator | 2026-03-19 01:01:36.295516 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-19 01:01:36.295522 | orchestrator | Thursday 19 March 2026 01:00:48 +0000 (0:00:03.685) 0:01:20.989 ******** 2026-03-19 01:01:36.295537 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:01:36.295540 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:01:36.295543 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:01:36.295547 | orchestrator | 2026-03-19 01:01:36.295550 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-19 01:01:36.295553 | orchestrator | Thursday 19 March 2026 01:00:49 +0000 (0:00:00.333) 0:01:21.322 ******** 2026-03-19 01:01:36.295556 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295559 | orchestrator | 2026-03-19 01:01:36.295562 | 
orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-19 01:01:36.295565 | orchestrator | Thursday 19 March 2026 01:00:51 +0000 (0:00:02.466) 0:01:23.788 ******** 2026-03-19 01:01:36.295568 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295571 | orchestrator | 2026-03-19 01:01:36.295574 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-19 01:01:36.295578 | orchestrator | Thursday 19 March 2026 01:00:54 +0000 (0:00:02.840) 0:01:26.629 ******** 2026-03-19 01:01:36.295581 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295584 | orchestrator | 2026-03-19 01:01:36.295587 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-19 01:01:36.295590 | orchestrator | Thursday 19 March 2026 01:01:04 +0000 (0:00:10.173) 0:01:36.803 ******** 2026-03-19 01:01:36.295593 | orchestrator | 2026-03-19 01:01:36.295596 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-19 01:01:36.295599 | orchestrator | Thursday 19 March 2026 01:01:05 +0000 (0:00:00.369) 0:01:37.172 ******** 2026-03-19 01:01:36.295602 | orchestrator | 2026-03-19 01:01:36.295605 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-19 01:01:36.295608 | orchestrator | Thursday 19 March 2026 01:01:05 +0000 (0:00:00.124) 0:01:37.297 ******** 2026-03-19 01:01:36.295612 | orchestrator | 2026-03-19 01:01:36.295615 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-19 01:01:36.295618 | orchestrator | Thursday 19 March 2026 01:01:05 +0000 (0:00:00.093) 0:01:37.391 ******** 2026-03-19 01:01:36.295621 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295624 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:01:36.295627 | orchestrator | changed: [testbed-node-2] 
2026-03-19 01:01:36.295630 | orchestrator | 2026-03-19 01:01:36.295633 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-19 01:01:36.295636 | orchestrator | Thursday 19 March 2026 01:01:11 +0000 (0:00:06.252) 0:01:43.643 ******** 2026-03-19 01:01:36.295639 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295643 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:01:36.295646 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:01:36.295649 | orchestrator | 2026-03-19 01:01:36.295653 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-19 01:01:36.295656 | orchestrator | Thursday 19 March 2026 01:01:21 +0000 (0:00:10.154) 0:01:53.798 ******** 2026-03-19 01:01:36.295659 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:01:36.295662 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:01:36.295665 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:01:36.295668 | orchestrator | 2026-03-19 01:01:36.295671 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:01:36.295675 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:01:36.295678 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 01:01:36.295681 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 01:01:36.295687 | orchestrator | 2026-03-19 01:01:36.295690 | orchestrator | 2026-03-19 01:01:36.295693 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:01:36.295696 | orchestrator | Thursday 19 March 2026 01:01:33 +0000 (0:00:11.429) 0:02:05.227 ******** 2026-03-19 01:01:36.295699 | orchestrator | 
=============================================================================== 2026-03-19 01:01:36.295702 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.14s 2026-03-19 01:01:36.295705 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.43s 2026-03-19 01:01:36.295708 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.17s 2026-03-19 01:01:36.295711 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.15s 2026-03-19 01:01:36.295715 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.49s 2026-03-19 01:01:36.295718 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 8.43s 2026-03-19 01:01:36.295721 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.25s 2026-03-19 01:01:36.295724 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.19s 2026-03-19 01:01:36.295729 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.47s 2026-03-19 01:01:36.295732 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.36s 2026-03-19 01:01:36.295735 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.72s 2026-03-19 01:01:36.295738 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.69s 2026-03-19 01:01:36.295741 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.68s 2026-03-19 01:01:36.295744 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.66s 2026-03-19 01:01:36.295750 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.99s 2026-03-19 01:01:36.295753 | orchestrator | barbican : 
Creating barbican database user and setting permissions ------ 2.84s 2026-03-19 01:01:36.295756 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.47s 2026-03-19 01:01:36.295759 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.09s 2026-03-19 01:01:36.295762 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.83s 2026-03-19 01:01:36.295766 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.78s 2026-03-19 01:01:36.295769 | orchestrator | 2026-03-19 01:01:36 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:36.295772 | orchestrator | 2026-03-19 01:01:36 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:36.295775 | orchestrator | 2026-03-19 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:39.321851 | orchestrator | 2026-03-19 01:01:39 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:39.324707 | orchestrator | 2026-03-19 01:01:39 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:39.327633 | orchestrator | 2026-03-19 01:01:39 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:39.328935 | orchestrator | 2026-03-19 01:01:39 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:39.329497 | orchestrator | 2026-03-19 01:01:39 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:39.329514 | orchestrator | 2026-03-19 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:42.365428 | orchestrator | 2026-03-19 01:01:42 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:42.367341 | orchestrator | 2026-03-19 01:01:42 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in 
state STARTED 2026-03-19 01:01:42.368964 | orchestrator | 2026-03-19 01:01:42 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:42.370605 | orchestrator | 2026-03-19 01:01:42 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:42.372376 | orchestrator | 2026-03-19 01:01:42 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:42.372411 | orchestrator | 2026-03-19 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:45.399803 | orchestrator | 2026-03-19 01:01:45 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:45.400656 | orchestrator | 2026-03-19 01:01:45 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:45.401158 | orchestrator | 2026-03-19 01:01:45 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:45.402905 | orchestrator | 2026-03-19 01:01:45 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:45.402971 | orchestrator | 2026-03-19 01:01:45 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:45.402979 | orchestrator | 2026-03-19 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:48.441382 | orchestrator | 2026-03-19 01:01:48 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:48.442151 | orchestrator | 2026-03-19 01:01:48 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:48.443352 | orchestrator | 2026-03-19 01:01:48 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:48.444590 | orchestrator | 2026-03-19 01:01:48 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:48.445764 | orchestrator | 2026-03-19 01:01:48 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state 
STARTED 2026-03-19 01:01:48.445800 | orchestrator | 2026-03-19 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:51.468098 | orchestrator | 2026-03-19 01:01:51 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:51.468725 | orchestrator | 2026-03-19 01:01:51 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:51.470308 | orchestrator | 2026-03-19 01:01:51 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:51.470357 | orchestrator | 2026-03-19 01:01:51 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:51.470941 | orchestrator | 2026-03-19 01:01:51 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:51.470975 | orchestrator | 2026-03-19 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:54.508912 | orchestrator | 2026-03-19 01:01:54 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:54.509632 | orchestrator | 2026-03-19 01:01:54 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:01:54.510805 | orchestrator | 2026-03-19 01:01:54 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:54.512086 | orchestrator | 2026-03-19 01:01:54 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:54.513573 | orchestrator | 2026-03-19 01:01:54 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:54.513617 | orchestrator | 2026-03-19 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:01:57.547903 | orchestrator | 2026-03-19 01:01:57 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:01:57.548789 | orchestrator | 2026-03-19 01:01:57 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 
01:01:57.549899 | orchestrator | 2026-03-19 01:01:57 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:01:57.550826 | orchestrator | 2026-03-19 01:01:57 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:01:57.551957 | orchestrator | 2026-03-19 01:01:57 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:01:57.552009 | orchestrator | 2026-03-19 01:01:57 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:00.590225 | orchestrator | 2026-03-19 01:02:00 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:00.590273 | orchestrator | 2026-03-19 01:02:00 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:00.590829 | orchestrator | 2026-03-19 01:02:00 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:00.591730 | orchestrator | 2026-03-19 01:02:00 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:00.592623 | orchestrator | 2026-03-19 01:02:00 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 01:02:00.592656 | orchestrator | 2026-03-19 01:02:00 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:03.635328 | orchestrator | 2026-03-19 01:02:03 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:03.635393 | orchestrator | 2026-03-19 01:02:03 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:03.638269 | orchestrator | 2026-03-19 01:02:03 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:03.638330 | orchestrator | 2026-03-19 01:02:03 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:03.640831 | orchestrator | 2026-03-19 01:02:03 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state STARTED 2026-03-19 
01:02:03.641038 | orchestrator | 2026-03-19 01:02:03 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:06.686482 | orchestrator | 2026-03-19 01:02:06 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:06.688605 | orchestrator | 2026-03-19 01:02:06 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:06.690625 | orchestrator | 2026-03-19 01:02:06 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:06.692247 | orchestrator | 2026-03-19 01:02:06 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:06.693373 | orchestrator | 2026-03-19 01:02:06 | INFO  | Task 4d7bfd1f-abe7-4186-95ea-a96f2ad933c1 is in state SUCCESS 2026-03-19 01:02:06.694721 | orchestrator | 2026-03-19 01:02:06 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:09.734516 | orchestrator | 2026-03-19 01:02:09 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:09.736190 | orchestrator | 2026-03-19 01:02:09 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:09.736725 | orchestrator | 2026-03-19 01:02:09 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:09.737181 | orchestrator | 2026-03-19 01:02:09 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:09.737218 | orchestrator | 2026-03-19 01:02:09 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:12.779732 | orchestrator | 2026-03-19 01:02:12 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:12.781337 | orchestrator | 2026-03-19 01:02:12 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:12.782694 | orchestrator | 2026-03-19 01:02:12 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:12.784043 | orchestrator 
| 2026-03-19 01:02:12 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:12.784179 | orchestrator | 2026-03-19 01:02:12 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:15.821258 | orchestrator | 2026-03-19 01:02:15 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:15.821913 | orchestrator | 2026-03-19 01:02:15 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:15.822746 | orchestrator | 2026-03-19 01:02:15 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:15.823404 | orchestrator | 2026-03-19 01:02:15 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:15.823651 | orchestrator | 2026-03-19 01:02:15 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:18.851512 | orchestrator | 2026-03-19 01:02:18 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state STARTED 2026-03-19 01:02:18.851578 | orchestrator | 2026-03-19 01:02:18 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state STARTED 2026-03-19 01:02:18.851584 | orchestrator | 2026-03-19 01:02:18 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:18.852137 | orchestrator | 2026-03-19 01:02:18 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:18.852194 | orchestrator | 2026-03-19 01:02:18 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:21.884255 | orchestrator | 2026-03-19 01:02:21.884313 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-19 01:02:21.884321 | orchestrator | 2.16.14 2026-03-19 01:02:21.884328 | orchestrator | 2026-03-19 01:02:21.884334 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-19 01:02:21.884340 | orchestrator | 2026-03-19 01:02:21.884346 | orchestrator | TASK 
[Disable the ceph dashboard] ********************************************** 2026-03-19 01:02:21.884351 | orchestrator | Thursday 19 March 2026 01:00:29 +0000 (0:00:00.172) 0:00:00.172 ******** 2026-03-19 01:02:21.884357 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884363 | orchestrator | 2026-03-19 01:02:21.884369 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-19 01:02:21.884374 | orchestrator | Thursday 19 March 2026 01:00:31 +0000 (0:00:01.926) 0:00:02.099 ******** 2026-03-19 01:02:21.884380 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884386 | orchestrator | 2026-03-19 01:02:21.884392 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-19 01:02:21.884398 | orchestrator | Thursday 19 March 2026 01:00:32 +0000 (0:00:00.922) 0:00:03.021 ******** 2026-03-19 01:02:21.884403 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884409 | orchestrator | 2026-03-19 01:02:21.884415 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-19 01:02:21.884421 | orchestrator | Thursday 19 March 2026 01:00:33 +0000 (0:00:00.859) 0:00:03.880 ******** 2026-03-19 01:02:21.884426 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884432 | orchestrator | 2026-03-19 01:02:21.884437 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-19 01:02:21.884443 | orchestrator | Thursday 19 March 2026 01:00:34 +0000 (0:00:01.021) 0:00:04.902 ******** 2026-03-19 01:02:21.884467 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884473 | orchestrator | 2026-03-19 01:02:21.884478 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-19 01:02:21.884484 | orchestrator | Thursday 19 March 2026 01:00:35 +0000 (0:00:01.052) 0:00:05.954 ******** 2026-03-19 
01:02:21.884490 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884495 | orchestrator | 2026-03-19 01:02:21.884501 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-19 01:02:21.884506 | orchestrator | Thursday 19 March 2026 01:00:36 +0000 (0:00:00.931) 0:00:06.886 ******** 2026-03-19 01:02:21.884512 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884518 | orchestrator | 2026-03-19 01:02:21.884523 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-19 01:02:21.884529 | orchestrator | Thursday 19 March 2026 01:00:38 +0000 (0:00:01.662) 0:00:08.548 ******** 2026-03-19 01:02:21.884534 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884540 | orchestrator | 2026-03-19 01:02:21.884556 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-19 01:02:21.884561 | orchestrator | Thursday 19 March 2026 01:00:39 +0000 (0:00:01.201) 0:00:09.749 ******** 2026-03-19 01:02:21.884566 | orchestrator | changed: [testbed-manager] 2026-03-19 01:02:21.884570 | orchestrator | 2026-03-19 01:02:21.884576 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-19 01:02:21.884580 | orchestrator | Thursday 19 March 2026 01:01:38 +0000 (0:00:59.453) 0:01:09.203 ******** 2026-03-19 01:02:21.884585 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:02:21.884589 | orchestrator | 2026-03-19 01:02:21.884594 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-19 01:02:21.884599 | orchestrator | 2026-03-19 01:02:21.884603 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-19 01:02:21.884608 | orchestrator | Thursday 19 March 2026 01:01:38 +0000 (0:00:00.106) 0:01:09.310 ******** 2026-03-19 01:02:21.884613 | orchestrator | changed: 
[testbed-node-0] 2026-03-19 01:02:21.884617 | orchestrator | 2026-03-19 01:02:21.884622 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-19 01:02:21.884627 | orchestrator | 2026-03-19 01:02:21.884632 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-19 01:02:21.884647 | orchestrator | Thursday 19 March 2026 01:01:50 +0000 (0:00:11.597) 0:01:20.907 ******** 2026-03-19 01:02:21.884652 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.884658 | orchestrator | 2026-03-19 01:02:21.884662 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-19 01:02:21.884667 | orchestrator | 2026-03-19 01:02:21.884672 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-19 01:02:21.884677 | orchestrator | Thursday 19 March 2026 01:01:51 +0000 (0:00:01.280) 0:01:22.188 ******** 2026-03-19 01:02:21.884683 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.884688 | orchestrator | 2026-03-19 01:02:21.884694 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:02:21.884700 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:02:21.884706 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:02:21.884712 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:02:21.884717 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:02:21.884723 | orchestrator | 2026-03-19 01:02:21.884729 | orchestrator | 2026-03-19 01:02:21.884734 | orchestrator | 2026-03-19 01:02:21.884739 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-19 01:02:21.884750 | orchestrator | Thursday 19 March 2026 01:02:03 +0000 (0:00:11.753) 0:01:33.942 ******** 2026-03-19 01:02:21.884755 | orchestrator | =============================================================================== 2026-03-19 01:02:21.884761 | orchestrator | Create admin user ------------------------------------------------------ 59.45s 2026-03-19 01:02:21.884777 | orchestrator | Restart ceph manager service ------------------------------------------- 24.63s 2026-03-19 01:02:21.884784 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.93s 2026-03-19 01:02:21.884789 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.66s 2026-03-19 01:02:21.884795 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.20s 2026-03-19 01:02:21.884801 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.05s 2026-03-19 01:02:21.884806 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.02s 2026-03-19 01:02:21.884812 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.93s 2026-03-19 01:02:21.884816 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.92s 2026-03-19 01:02:21.884822 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.86s 2026-03-19 01:02:21.884827 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s 2026-03-19 01:02:21.884832 | orchestrator | 2026-03-19 01:02:21.884837 | orchestrator | 2026-03-19 01:02:21.884842 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:02:21.884847 | orchestrator | 2026-03-19 01:02:21.884852 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2026-03-19 01:02:21.884856 | orchestrator | Thursday 19 March 2026 01:01:18 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-03-19 01:02:21.884861 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:02:21.884866 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:02:21.884871 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:02:21.884876 | orchestrator | 2026-03-19 01:02:21.884898 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:02:21.884904 | orchestrator | Thursday 19 March 2026 01:01:18 +0000 (0:00:00.426) 0:00:00.723 ******** 2026-03-19 01:02:21.884910 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-19 01:02:21.884915 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-19 01:02:21.884921 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-19 01:02:21.884926 | orchestrator | 2026-03-19 01:02:21.884968 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-19 01:02:21.884975 | orchestrator | 2026-03-19 01:02:21.884981 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-19 01:02:21.884986 | orchestrator | Thursday 19 March 2026 01:01:19 +0000 (0:00:00.556) 0:00:01.279 ******** 2026-03-19 01:02:21.884997 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:02:21.885003 | orchestrator | 2026-03-19 01:02:21.885008 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-19 01:02:21.885014 | orchestrator | Thursday 19 March 2026 01:01:19 +0000 (0:00:00.588) 0:00:01.868 ******** 2026-03-19 01:02:21.885020 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-19 01:02:21.885025 | orchestrator | 2026-03-19 
01:02:21.885031 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-19 01:02:21.885037 | orchestrator | Thursday 19 March 2026 01:01:23 +0000 (0:00:03.979) 0:00:05.847 ******** 2026-03-19 01:02:21.885042 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-19 01:02:21.885048 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-19 01:02:21.885053 | orchestrator | 2026-03-19 01:02:21.885059 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-19 01:02:21.885071 | orchestrator | Thursday 19 March 2026 01:01:30 +0000 (0:00:06.715) 0:00:12.562 ******** 2026-03-19 01:02:21.885076 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 01:02:21.885082 | orchestrator | 2026-03-19 01:02:21.885088 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-19 01:02:21.885093 | orchestrator | Thursday 19 March 2026 01:01:33 +0000 (0:00:03.159) 0:00:15.722 ******** 2026-03-19 01:02:21.885097 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-19 01:02:21.885103 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:02:21.885108 | orchestrator | 2026-03-19 01:02:21.885113 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-19 01:02:21.885118 | orchestrator | Thursday 19 March 2026 01:01:37 +0000 (0:00:03.502) 0:00:19.224 ******** 2026-03-19 01:02:21.885124 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:02:21.885129 | orchestrator | 2026-03-19 01:02:21.885135 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-19 01:02:21.885140 | orchestrator | Thursday 19 March 2026 01:01:40 +0000 
(0:00:03.005) 0:00:22.230 ******** 2026-03-19 01:02:21.885146 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-19 01:02:21.885151 | orchestrator | 2026-03-19 01:02:21.885157 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-19 01:02:21.885170 | orchestrator | Thursday 19 March 2026 01:01:43 +0000 (0:00:03.677) 0:00:25.907 ******** 2026-03-19 01:02:21.885176 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.885182 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.885187 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.885193 | orchestrator | 2026-03-19 01:02:21.885198 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-19 01:02:21.885203 | orchestrator | Thursday 19 March 2026 01:01:44 +0000 (0:00:00.261) 0:00:26.169 ******** 2026-03-19 01:02:21.885220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885248 | orchestrator | 2026-03-19 01:02:21.885255 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-19 01:02:21.885260 | orchestrator | Thursday 19 March 2026 01:01:46 +0000 (0:00:01.781) 
0:00:27.950 ******** 2026-03-19 01:02:21.885266 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.885271 | orchestrator | 2026-03-19 01:02:21.885277 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-19 01:02:21.885282 | orchestrator | Thursday 19 March 2026 01:01:46 +0000 (0:00:00.105) 0:00:28.055 ******** 2026-03-19 01:02:21.885288 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.885294 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.885300 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.885305 | orchestrator | 2026-03-19 01:02:21.885311 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-19 01:02:21.885349 | orchestrator | Thursday 19 March 2026 01:01:46 +0000 (0:00:00.248) 0:00:28.303 ******** 2026-03-19 01:02:21.885356 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:02:21.885362 | orchestrator | 2026-03-19 01:02:21.885367 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-19 01:02:21.885372 | orchestrator | Thursday 19 March 2026 01:01:46 +0000 (0:00:00.464) 0:00:28.768 ******** 2026-03-19 01:02:21.885383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885408 | orchestrator | 2026-03-19 01:02:21.885413 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-19 01:02:21.885418 | orchestrator | Thursday 19 March 2026 01:01:48 +0000 (0:00:01.564) 0:00:30.332 ******** 2026-03-19 01:02:21.885424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885430 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.885436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885441 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.885451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885457 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.885466 | orchestrator | 2026-03-19 01:02:21.885472 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-19 01:02:21.885477 | orchestrator | Thursday 19 March 2026 01:01:48 +0000 (0:00:00.364) 0:00:30.696 ******** 2026-03-19 01:02:21.885485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885491 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.885496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885502 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.885507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885512 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.885518 | orchestrator | 2026-03-19 01:02:21.885523 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-19 01:02:21.885528 | orchestrator | Thursday 19 March 2026 01:01:49 +0000 (0:00:00.750) 0:00:31.447 ******** 2026-03-19 01:02:21.885537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885546 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885567 | orchestrator | 2026-03-19 01:02:21.885572 | orchestrator | TASK [placement : Copying over placement.conf] 
********************************* 2026-03-19 01:02:21.885577 | orchestrator | Thursday 19 March 2026 01:01:51 +0000 (0:00:01.888) 0:00:33.336 ******** 2026-03-19 01:02:21.885582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885605 | orchestrator | 2026-03-19 01:02:21.885610 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-19 01:02:21.885615 | orchestrator | Thursday 19 March 2026 01:01:53 +0000 (0:00:01.939) 0:00:35.276 ******** 2026-03-19 01:02:21.885621 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-19 01:02:21.885627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-19 01:02:21.885638 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-19 01:02:21.885645 | orchestrator | 2026-03-19 01:02:21.885651 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-19 01:02:21.885657 | orchestrator | Thursday 19 March 2026 01:01:54 +0000 (0:00:01.270) 0:00:36.546 ******** 2026-03-19 01:02:21.885662 | orchestrator | 
changed: [testbed-node-0] 2026-03-19 01:02:21.885669 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.885674 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.885679 | orchestrator | 2026-03-19 01:02:21.885684 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-19 01:02:21.885689 | orchestrator | Thursday 19 March 2026 01:01:55 +0000 (0:00:01.242) 0:00:37.789 ******** 2026-03-19 01:02:21.885695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885700 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.885709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885718 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.885724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 01:02:21.885729 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.885735 | orchestrator | 2026-03-19 01:02:21.885740 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-19 01:02:21.885744 | orchestrator | Thursday 19 March 2026 01:01:56 +0000 (0:00:00.558) 0:00:38.347 ******** 2026-03-19 01:02:21.885752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 01:02:21.885772 | orchestrator | 2026-03-19 01:02:21.885777 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-19 01:02:21.885781 | orchestrator | Thursday 19 March 2026 01:01:57 +0000 (0:00:00.962) 0:00:39.310 ******** 2026-03-19 01:02:21.885786 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.885791 | orchestrator | 2026-03-19 01:02:21.885799 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-19 01:02:21.885804 | orchestrator | Thursday 19 March 2026 01:01:59 +0000 (0:00:01.935) 0:00:41.245 ******** 2026-03-19 01:02:21.885808 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.885813 | orchestrator | 2026-03-19 01:02:21.885818 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-19 01:02:21.885824 | orchestrator | Thursday 19 March 2026 01:02:01 +0000 (0:00:02.034) 0:00:43.280 ******** 2026-03-19 01:02:21.885828 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.885833 | orchestrator | 2026-03-19 01:02:21.885838 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-19 01:02:21.885842 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:15.230) 0:00:58.511 ******** 2026-03-19 01:02:21.885847 | orchestrator | 2026-03-19 01:02:21.885852 | 
orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-19 01:02:21.885857 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:00.059) 0:00:58.571 ******** 2026-03-19 01:02:21.885862 | orchestrator | 2026-03-19 01:02:21.885867 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-19 01:02:21.885872 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:00.058) 0:00:58.630 ******** 2026-03-19 01:02:21.885877 | orchestrator | 2026-03-19 01:02:21.885882 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-19 01:02:21.885887 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:00.063) 0:00:58.693 ******** 2026-03-19 01:02:21.885891 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.885896 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.885901 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.885906 | orchestrator | 2026-03-19 01:02:21.885911 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:02:21.885916 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 01:02:21.885922 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 01:02:21.885928 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 01:02:21.885947 | orchestrator | 2026-03-19 01:02:21.885953 | orchestrator | 2026-03-19 01:02:21.885958 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:02:21.885972 | orchestrator | Thursday 19 March 2026 01:02:21 +0000 (0:00:04.343) 0:01:03.036 ******** 2026-03-19 01:02:21.885978 | orchestrator | 
=============================================================================== 2026-03-19 01:02:21.885983 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.23s 2026-03-19 01:02:21.885988 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.72s 2026-03-19 01:02:21.885994 | orchestrator | placement : Restart placement-api container ----------------------------- 4.34s 2026-03-19 01:02:21.885999 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.98s 2026-03-19 01:02:21.886005 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.68s 2026-03-19 01:02:21.886052 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.50s 2026-03-19 01:02:21.886060 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.16s 2026-03-19 01:02:21.886064 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.01s 2026-03-19 01:02:21.886070 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.03s 2026-03-19 01:02:21.886074 | orchestrator | placement : Copying over placement.conf --------------------------------- 1.94s 2026-03-19 01:02:21.886080 | orchestrator | placement : Creating placement databases -------------------------------- 1.94s 2026-03-19 01:02:21.886085 | orchestrator | placement : Copying over config.json files for services ----------------- 1.89s 2026-03-19 01:02:21.886090 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.78s 2026-03-19 01:02:21.886096 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.56s 2026-03-19 01:02:21.886102 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.27s 2026-03-19 01:02:21.886107 | orchestrator | placement : 
Copying over migrate-db.rc.j2 configuration ----------------- 1.24s 2026-03-19 01:02:21.886145 | orchestrator | placement : Check placement containers ---------------------------------- 0.96s 2026-03-19 01:02:21.886151 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.75s 2026-03-19 01:02:21.886157 | orchestrator | placement : include_tasks ----------------------------------------------- 0.59s 2026-03-19 01:02:21.886162 | orchestrator | placement : Copying over existing policy file --------------------------- 0.56s 2026-03-19 01:02:21.886168 | orchestrator | 2026-03-19 01:02:21 | INFO  | Task f99c28cd-dacc-4fc2-a5af-ea5cebbcf95e is in state SUCCESS 2026-03-19 01:02:21.886229 | orchestrator | 2026-03-19 01:02:21 | INFO  | Task f6a36b51-c4d2-4bac-adbc-9ee896ca4ad9 is in state SUCCESS 2026-03-19 01:02:21.886524 | orchestrator | 2026-03-19 01:02:21.886549 | orchestrator | 2026-03-19 01:02:21.886557 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:02:21.886565 | orchestrator | 2026-03-19 01:02:21.886572 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:02:21.886579 | orchestrator | Thursday 19 March 2026 00:59:28 +0000 (0:00:00.444) 0:00:00.444 ******** 2026-03-19 01:02:21.886586 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:02:21.886594 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:02:21.886600 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:02:21.886606 | orchestrator | 2026-03-19 01:02:21.886613 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:02:21.886619 | orchestrator | Thursday 19 March 2026 00:59:29 +0000 (0:00:00.298) 0:00:00.743 ******** 2026-03-19 01:02:21.886626 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-19 01:02:21.886633 | orchestrator | ok: [testbed-node-1] => 
(item=enable_designate_True) 2026-03-19 01:02:21.886640 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-19 01:02:21.886647 | orchestrator | 2026-03-19 01:02:21.886654 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-19 01:02:21.886659 | orchestrator | 2026-03-19 01:02:21.886665 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-19 01:02:21.886671 | orchestrator | Thursday 19 March 2026 00:59:29 +0000 (0:00:00.288) 0:00:01.031 ******** 2026-03-19 01:02:21.886677 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:02:21.886683 | orchestrator | 2026-03-19 01:02:21.886688 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-19 01:02:21.886694 | orchestrator | Thursday 19 March 2026 00:59:29 +0000 (0:00:00.557) 0:00:01.589 ******** 2026-03-19 01:02:21.886700 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-19 01:02:21.886707 | orchestrator | 2026-03-19 01:02:21.886714 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-19 01:02:21.886738 | orchestrator | Thursday 19 March 2026 00:59:34 +0000 (0:00:05.054) 0:00:06.644 ******** 2026-03-19 01:02:21.886745 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-19 01:02:21.886752 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-19 01:02:21.886774 | orchestrator | 2026-03-19 01:02:21.886781 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-19 01:02:21.886788 | orchestrator | Thursday 19 March 2026 00:59:42 +0000 (0:00:07.628) 0:00:14.272 ******** 2026-03-19 01:02:21.886795 | 
orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-19 01:02:21.886802 | orchestrator | 2026-03-19 01:02:21.886808 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-19 01:02:21.886815 | orchestrator | Thursday 19 March 2026 00:59:46 +0000 (0:00:03.422) 0:00:17.694 ******** 2026-03-19 01:02:21.886829 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-19 01:02:21.886836 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:02:21.886843 | orchestrator | 2026-03-19 01:02:21.886850 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-19 01:02:21.886857 | orchestrator | Thursday 19 March 2026 00:59:49 +0000 (0:00:03.639) 0:00:21.334 ******** 2026-03-19 01:02:21.886864 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:02:21.886871 | orchestrator | 2026-03-19 01:02:21.886877 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-19 01:02:21.886884 | orchestrator | Thursday 19 March 2026 00:59:53 +0000 (0:00:03.350) 0:00:24.685 ******** 2026-03-19 01:02:21.886890 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-19 01:02:21.886897 | orchestrator | 2026-03-19 01:02:21.886904 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-19 01:02:21.886911 | orchestrator | Thursday 19 March 2026 00:59:57 +0000 (0:00:04.431) 0:00:29.116 ******** 2026-03-19 01:02:21.886920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.886957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.886967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.886981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.886993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 
01:02:21.887094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887118 | orchestrator | 2026-03-19 01:02:21.887125 | orchestrator | TASK [designate : Check if policies shall be 
overwritten] ********************** 2026-03-19 01:02:21.887132 | orchestrator | Thursday 19 March 2026 01:00:01 +0000 (0:00:03.698) 0:00:32.815 ******** 2026-03-19 01:02:21.887139 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.887147 | orchestrator | 2026-03-19 01:02:21.887154 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-19 01:02:21.887162 | orchestrator | Thursday 19 March 2026 01:00:01 +0000 (0:00:00.288) 0:00:33.103 ******** 2026-03-19 01:02:21.887169 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.887176 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.887184 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.887191 | orchestrator | 2026-03-19 01:02:21.887198 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-19 01:02:21.887205 | orchestrator | Thursday 19 March 2026 01:00:01 +0000 (0:00:00.281) 0:00:33.385 ******** 2026-03-19 01:02:21.887212 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:02:21.887219 | orchestrator | 2026-03-19 01:02:21.887227 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-19 01:02:21.887234 | orchestrator | Thursday 19 March 2026 01:00:02 +0000 (0:00:00.517) 0:00:33.902 ******** 2026-03-19 01:02:21.887242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.887258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.887266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.887276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887358 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887425 | orchestrator | 2026-03-19 01:02:21.887432 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-19 01:02:21.887440 | orchestrator | Thursday 19 March 2026 
01:00:08 +0000 (0:00:05.868) 0:00:39.770 ******** 2026-03-19 01:02:21.887449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.887465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 01:02:21.887478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887512 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.887520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.887531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 01:02:21.887542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887574 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.887581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.887593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-03-19 01:02:21.887604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887633 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.887640 | orchestrator | 2026-03-19 01:02:21.887649 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-19 01:02:21.887655 | orchestrator | Thursday 19 March 2026 01:00:08 +0000 (0:00:00.890) 0:00:40.661 ******** 2026-03-19 01:02:21.887661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.887672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 01:02:21.887683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887712 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.887722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.887734 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 01:02:21.887744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.887801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 01:02:21.887810 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.887817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:02:21.887850 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.887857 | orchestrator | 2026-03-19 01:02:21.887863 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-19 01:02:21.887876 | orchestrator | Thursday 19 March 2026 01:00:09 +0000 (0:00:00.928) 0:00:41.589 ******** 2026-03-19 01:02:21.887887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.887894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.887905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.887913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.887920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-03-19 01:02:21.888139 | orchestrator | 2026-03-19 01:02:21.888151 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-19 01:02:21.888158 | orchestrator | Thursday 19 March 2026 01:00:16 +0000 (0:00:07.030) 0:00:48.620 ******** 2026-03-19 01:02:21.888167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.888175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.888181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 01:02:21.888192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888319 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.888330 | orchestrator | 2026-03-19 01:02:21.888337 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-19 01:02:21.888344 | orchestrator | Thursday 19 March 2026 01:00:38 +0000 (0:00:21.897) 0:01:10.518 ******** 2026-03-19 01:02:21.888350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-19 01:02:21.888358 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-19 01:02:21.888364 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-19 01:02:21.888371 | orchestrator | 2026-03-19 01:02:21.888378 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-19 01:02:21.888385 | orchestrator | Thursday 19 March 2026 01:00:44 +0000 (0:00:05.568) 0:01:16.086 ******** 2026-03-19 01:02:21.888391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-19 01:02:21.888399 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-19 01:02:21.888406 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-19 01:02:21.888413 | orchestrator | 2026-03-19 
01:02:21.888420 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-19 01:02:21.888427 | orchestrator | Thursday 19 March 2026 01:00:48 +0000 (0:00:04.547) 0:01:20.633 ******** 2026-03-19 01:02:21.888437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 01:02:21.888445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})
2026-03-19 01:02:21.888457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888596 | orchestrator |
2026-03-19 01:02:21.888603 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-19 01:02:21.888610 | orchestrator | Thursday 19 March 2026 01:00:52 +0000 (0:00:03.355) 0:01:23.989 ********
2026-03-19 01:02:21.888622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888767 | orchestrator |
2026-03-19 01:02:21.888774 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-19 01:02:21.888780 | orchestrator | Thursday 19 March 2026 01:00:55 +0000 (0:00:03.344) 0:01:27.334 ********
2026-03-19 01:02:21.888787 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:02:21.888793 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:02:21.888800 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:02:21.888806 | orchestrator |
2026-03-19 01:02:21.888812 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-19 01:02:21.888819 | orchestrator | Thursday 19 March 2026 01:00:56 +0000 (0:00:00.541) 0:01:27.875 ********
2026-03-19 01:02:21.888825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888877 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:02:21.888884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888947 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:02:21.888955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.888964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.888970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.888999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:02:21.889006 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:02:21.889011 | orchestrator |
2026-03-19 01:02:21.889018 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-19 01:02:21.889024 | orchestrator | Thursday 19 March 2026 01:00:57 +0000 (0:00:01.115) 0:01:28.991 ********
2026-03-19 01:02:21.889030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.889040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.889051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 01:02:21.889058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.889069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.889076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 01:02:21.889083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:02:21.889187 | orchestrator | 2026-03-19 01:02:21.889194 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-19 01:02:21.889200 | orchestrator | Thursday 19 March 2026 01:01:01 +0000 (0:00:04.057) 0:01:33.049 ******** 2026-03-19 01:02:21.889206 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:02:21.889213 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:02:21.889219 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:02:21.889226 | orchestrator | 2026-03-19 01:02:21.889232 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-19 01:02:21.889238 | orchestrator | Thursday 19 March 2026 01:01:01 +0000 (0:00:00.492) 0:01:33.541 ******** 2026-03-19 01:02:21.889245 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-19 01:02:21.889251 | orchestrator | 2026-03-19 01:02:21.889258 | orchestrator | 
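The container definitions checked above each carry a kolla-style healthcheck dict with string-valued settings (`interval`, `retries`, `start_period`, `test`, `timeout`). As a rough sketch of what such a dict means at the container runtime level, the helper below maps it onto the Docker Engine API's HealthConfig fields (which take durations in nanoseconds). The function name and the exact mapping are our illustrative assumption, not kolla-ansible's actual code path.

```python
# Sketch: converting a kolla-style healthcheck dict (as logged above) into
# Docker Engine API HealthConfig fields. Durations in the API are
# nanoseconds; kolla's values here are seconds as strings.
NS_PER_SEC = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    """Map string-valued kolla healthcheck settings to Docker HealthConfig."""
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://...']
        "Interval": int(hc["interval"]) * NS_PER_SEC,
        "Timeout": int(hc["timeout"]) * NS_PER_SEC,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * NS_PER_SEC,
    }

# One of the designate-api healthchecks from the log above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["Interval"])  # 30000000000
```

The three healthcheck helpers visible in the log (`healthcheck_curl`, `healthcheck_port`, `healthcheck_listen`) are the `test` commands run inside the container on that interval.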
TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-19 01:02:21.889264 | orchestrator | Thursday 19 March 2026 01:01:04 +0000 (0:00:02.187) 0:01:35.729 ******** 2026-03-19 01:02:21.889271 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 01:02:21.889277 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-19 01:02:21.889284 | orchestrator | 2026-03-19 01:02:21.889290 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-19 01:02:21.889297 | orchestrator | Thursday 19 March 2026 01:01:06 +0000 (0:00:02.835) 0:01:38.564 ******** 2026-03-19 01:02:21.889303 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889309 | orchestrator | 2026-03-19 01:02:21.889320 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-19 01:02:21.889326 | orchestrator | Thursday 19 March 2026 01:01:21 +0000 (0:00:14.134) 0:01:52.699 ******** 2026-03-19 01:02:21.889332 | orchestrator | 2026-03-19 01:02:21.889339 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-19 01:02:21.889345 | orchestrator | Thursday 19 March 2026 01:01:21 +0000 (0:00:00.050) 0:01:52.750 ******** 2026-03-19 01:02:21.889351 | orchestrator | 2026-03-19 01:02:21.889357 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-19 01:02:21.889363 | orchestrator | Thursday 19 March 2026 01:01:21 +0000 (0:00:00.049) 0:01:52.799 ******** 2026-03-19 01:02:21.889370 | orchestrator | 2026-03-19 01:02:21.889376 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-19 01:02:21.889382 | orchestrator | Thursday 19 March 2026 01:01:21 +0000 (0:00:00.051) 0:01:52.850 ******** 2026-03-19 01:02:21.889388 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889394 | 
orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.889399 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.889405 | orchestrator | 2026-03-19 01:02:21.889411 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-19 01:02:21.889416 | orchestrator | Thursday 19 March 2026 01:01:34 +0000 (0:00:13.386) 0:02:06.236 ******** 2026-03-19 01:02:21.889422 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.889431 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.889437 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889444 | orchestrator | 2026-03-19 01:02:21.889450 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-19 01:02:21.889456 | orchestrator | Thursday 19 March 2026 01:01:43 +0000 (0:00:09.211) 0:02:15.447 ******** 2026-03-19 01:02:21.889462 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889468 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.889474 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.889480 | orchestrator | 2026-03-19 01:02:21.889487 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-19 01:02:21.889493 | orchestrator | Thursday 19 March 2026 01:01:49 +0000 (0:00:05.221) 0:02:20.669 ******** 2026-03-19 01:02:21.889499 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889505 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.889512 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.889518 | orchestrator | 2026-03-19 01:02:21.889525 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-19 01:02:21.889531 | orchestrator | Thursday 19 March 2026 01:01:59 +0000 (0:00:10.202) 0:02:30.872 ******** 2026-03-19 01:02:21.889537 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889543 | orchestrator | 
changed: [testbed-node-1] 2026-03-19 01:02:21.889550 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.889556 | orchestrator | 2026-03-19 01:02:21.889563 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-19 01:02:21.889569 | orchestrator | Thursday 19 March 2026 01:02:08 +0000 (0:00:09.247) 0:02:40.120 ******** 2026-03-19 01:02:21.889575 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889582 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:02:21.889588 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:02:21.889594 | orchestrator | 2026-03-19 01:02:21.889600 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-19 01:02:21.889606 | orchestrator | Thursday 19 March 2026 01:02:13 +0000 (0:00:05.335) 0:02:45.455 ******** 2026-03-19 01:02:21.889612 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:02:21.889618 | orchestrator | 2026-03-19 01:02:21.889625 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:02:21.889632 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:02:21.889639 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 01:02:21.889652 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 01:02:21.889657 | orchestrator | 2026-03-19 01:02:21.889663 | orchestrator | 2026-03-19 01:02:21.889675 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:02:21.889681 | orchestrator | Thursday 19 March 2026 01:02:21 +0000 (0:00:07.668) 0:02:53.123 ******** 2026-03-19 01:02:21.889688 | orchestrator | =============================================================================== 2026-03-19 01:02:21.889694 
| orchestrator | designate : Copying over designate.conf -------------------------------- 21.90s 2026-03-19 01:02:21.889701 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.13s 2026-03-19 01:02:21.889707 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.39s 2026-03-19 01:02:21.889714 | orchestrator | designate : Restart designate-producer container ----------------------- 10.20s 2026-03-19 01:02:21.889720 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.25s 2026-03-19 01:02:21.889726 | orchestrator | designate : Restart designate-api container ----------------------------- 9.21s 2026-03-19 01:02:21.889732 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.67s 2026-03-19 01:02:21.889739 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.63s 2026-03-19 01:02:21.889745 | orchestrator | designate : Copying over config.json files for services ----------------- 7.03s 2026-03-19 01:02:21.889751 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.87s 2026-03-19 01:02:21.889757 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.57s 2026-03-19 01:02:21.889764 | orchestrator | designate : Restart designate-worker container -------------------------- 5.34s 2026-03-19 01:02:21.889770 | orchestrator | designate : Restart designate-central container ------------------------- 5.22s 2026-03-19 01:02:21.889776 | orchestrator | service-ks-register : designate | Creating services --------------------- 5.05s 2026-03-19 01:02:21.889783 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.55s 2026-03-19 01:02:21.889789 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.43s 2026-03-19 01:02:21.889795 | orchestrator 
| designate : Check designate containers ---------------------------------- 4.06s 2026-03-19 01:02:21.889801 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.70s 2026-03-19 01:02:21.889808 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.64s 2026-03-19 01:02:21.889814 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.42s 2026-03-19 01:02:21.889820 | orchestrator | 2026-03-19 01:02:21 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:21.890896 | orchestrator | 2026-03-19 01:02:21 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:21.891075 | orchestrator | 2026-03-19 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:24.948290 | orchestrator | 2026-03-19 01:02:24 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:24.948815 | orchestrator | 2026-03-19 01:02:24 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:24.949511 | orchestrator | 2026-03-19 01:02:24 | INFO  | Task 6fa554bb-8205-45ae-9df8-82c35e31007c is in state STARTED 2026-03-19 01:02:24.950702 | orchestrator | 2026-03-19 01:02:24 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:24.951552 | orchestrator | 2026-03-19 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:27.982397 | orchestrator | 2026-03-19 01:02:27 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:27.983770 | orchestrator | 2026-03-19 01:02:27 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:27.984380 | orchestrator | 2026-03-19 01:02:27 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:27.985299 | orchestrator | 2026-03-19 01:02:27 | INFO  | Task 
6fa554bb-8205-45ae-9df8-82c35e31007c is in state SUCCESS 2026-03-19 01:02:27.985875 | orchestrator | 2026-03-19 01:02:27 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:27.985905 | orchestrator | 2026-03-19 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:31.040433 | orchestrator | 2026-03-19 01:02:31 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:31.040511 | orchestrator | 2026-03-19 01:02:31 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:31.040526 | orchestrator | 2026-03-19 01:02:31 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:31.040539 | orchestrator | 2026-03-19 01:02:31 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:31.040551 | orchestrator | 2026-03-19 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:34.064556 | orchestrator | 2026-03-19 01:02:34 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:34.064802 | orchestrator | 2026-03-19 01:02:34 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:34.064820 | orchestrator | 2026-03-19 01:02:34 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:34.065757 | orchestrator | 2026-03-19 01:02:34 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:34.065793 | orchestrator | 2026-03-19 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:37.093708 | orchestrator | 2026-03-19 01:02:37 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:37.094155 | orchestrator | 2026-03-19 01:02:37 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:37.095561 | orchestrator | 2026-03-19 01:02:37 | INFO  | Task 
7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:37.096316 | orchestrator | 2026-03-19 01:02:37 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:37.096369 | orchestrator | 2026-03-19 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:40.141269 | orchestrator | 2026-03-19 01:02:40 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:40.142691 | orchestrator | 2026-03-19 01:02:40 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:40.144382 | orchestrator | 2026-03-19 01:02:40 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:40.144420 | orchestrator | 2026-03-19 01:02:40 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:40.144425 | orchestrator | 2026-03-19 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:43.178251 | orchestrator | 2026-03-19 01:02:43 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:43.178319 | orchestrator | 2026-03-19 01:02:43 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:43.178998 | orchestrator | 2026-03-19 01:02:43 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:43.179536 | orchestrator | 2026-03-19 01:02:43 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:43.181421 | orchestrator | 2026-03-19 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:46.214825 | orchestrator | 2026-03-19 01:02:46 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:46.216438 | orchestrator | 2026-03-19 01:02:46 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:46.217543 | orchestrator | 2026-03-19 01:02:46 | INFO  | Task 
7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:46.218678 | orchestrator | 2026-03-19 01:02:46 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:46.218758 | orchestrator | 2026-03-19 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:49.260726 | orchestrator | 2026-03-19 01:02:49 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:49.262451 | orchestrator | 2026-03-19 01:02:49 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:49.264794 | orchestrator | 2026-03-19 01:02:49 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:49.270222 | orchestrator | 2026-03-19 01:02:49 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:49.270279 | orchestrator | 2026-03-19 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:52.302538 | orchestrator | 2026-03-19 01:02:52 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:52.302610 | orchestrator | 2026-03-19 01:02:52 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:52.304593 | orchestrator | 2026-03-19 01:02:52 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:52.305411 | orchestrator | 2026-03-19 01:02:52 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:52.305445 | orchestrator | 2026-03-19 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:55.333096 | orchestrator | 2026-03-19 01:02:55 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:55.335013 | orchestrator | 2026-03-19 01:02:55 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:55.335759 | orchestrator | 2026-03-19 01:02:55 | INFO  | Task 
7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:55.336296 | orchestrator | 2026-03-19 01:02:55 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:55.336341 | orchestrator | 2026-03-19 01:02:55 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:02:58.368174 | orchestrator | 2026-03-19 01:02:58 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:02:58.369037 | orchestrator | 2026-03-19 01:02:58 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:02:58.369627 | orchestrator | 2026-03-19 01:02:58 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:02:58.371468 | orchestrator | 2026-03-19 01:02:58 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:02:58.371504 | orchestrator | 2026-03-19 01:02:58 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:01.399354 | orchestrator | 2026-03-19 01:03:01 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:03:01.400169 | orchestrator | 2026-03-19 01:03:01 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:01.401000 | orchestrator | 2026-03-19 01:03:01 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:01.401489 | orchestrator | 2026-03-19 01:03:01 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:01.401554 | orchestrator | 2026-03-19 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:04.429268 | orchestrator | 2026-03-19 01:03:04 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state STARTED 2026-03-19 01:03:04.430073 | orchestrator | 2026-03-19 01:03:04 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:04.430577 | orchestrator | 2026-03-19 01:03:04 | INFO  | Task 
7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:04.431362 | orchestrator | 2026-03-19 01:03:04 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:04.431397 | orchestrator | 2026-03-19 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:07.478884 | orchestrator | 2026-03-19 01:03:07 | INFO  | Task e18e605e-78bf-47c6-9398-b264903ab62f is in state SUCCESS 2026-03-19 01:03:07.479744 | orchestrator | 2026-03-19 01:03:07 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:07.480520 | orchestrator | 2026-03-19 01:03:07 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:07.482411 | orchestrator | 2026-03-19 01:03:07 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:07.483533 | orchestrator | 2026-03-19 01:03:07 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:07.483576 | orchestrator | 2026-03-19 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:10.531424 | orchestrator | 2026-03-19 01:03:10 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:10.532537 | orchestrator | 2026-03-19 01:03:10 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:10.533630 | orchestrator | 2026-03-19 01:03:10 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:10.535510 | orchestrator | 2026-03-19 01:03:10 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:10.535724 | orchestrator | 2026-03-19 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:13.578927 | orchestrator | 2026-03-19 01:03:13 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:13.579613 | orchestrator | 2026-03-19 01:03:13 | INFO  | Task 
7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:13.581772 | orchestrator | 2026-03-19 01:03:13 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:13.585340 | orchestrator | 2026-03-19 01:03:13 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:13.585766 | orchestrator | 2026-03-19 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:16.649627 | orchestrator | 2026-03-19 01:03:16 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:16.650235 | orchestrator | 2026-03-19 01:03:16 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:16.651069 | orchestrator | 2026-03-19 01:03:16 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:16.651880 | orchestrator | 2026-03-19 01:03:16 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:16.651940 | orchestrator | 2026-03-19 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:19.679435 | orchestrator | 2026-03-19 01:03:19 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:19.679552 | orchestrator | 2026-03-19 01:03:19 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:19.680188 | orchestrator | 2026-03-19 01:03:19 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:19.682622 | orchestrator | 2026-03-19 01:03:19 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:19.682675 | orchestrator | 2026-03-19 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:22.727102 | orchestrator | 2026-03-19 01:03:22 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:22.728890 | orchestrator | 2026-03-19 01:03:22 | INFO  | Task 
7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:22.730524 | orchestrator | 2026-03-19 01:03:22 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:22.732181 | orchestrator | 2026-03-19 01:03:22 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:22.732239 | orchestrator | 2026-03-19 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:25.780159 | orchestrator | 2026-03-19 01:03:25 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:25.783648 | orchestrator | 2026-03-19 01:03:25 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state STARTED 2026-03-19 01:03:25.786226 | orchestrator | 2026-03-19 01:03:25 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:25.788501 | orchestrator | 2026-03-19 01:03:25 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:25.788606 | orchestrator | 2026-03-19 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:28.826301 | orchestrator | 2026-03-19 01:03:28 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:28.827531 | orchestrator | 2026-03-19 01:03:28 | INFO  | Task 7950df76-39c3-4578-b4bb-00a65cb8aa1b is in state SUCCESS 2026-03-19 01:03:28.828033 | orchestrator | 2026-03-19 01:03:28.828068 | orchestrator | 2026-03-19 01:03:28.828075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:03:28.828081 | orchestrator | 2026-03-19 01:03:28.828086 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:03:28.828091 | orchestrator | Thursday 19 March 2026 01:02:24 +0000 (0:00:00.147) 0:00:00.147 ******** 2026-03-19 01:03:28.828096 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:28.828102 | orchestrator | ok: [testbed-node-1] 
2026-03-19 01:03:28.828107 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:28.828113 | orchestrator | 2026-03-19 01:03:28.828118 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:03:28.828124 | orchestrator | Thursday 19 March 2026 01:02:24 +0000 (0:00:00.278) 0:00:00.426 ******** 2026-03-19 01:03:28.828129 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-19 01:03:28.828135 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-19 01:03:28.828141 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-19 01:03:28.828146 | orchestrator | 2026-03-19 01:03:28.828151 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-19 01:03:28.828157 | orchestrator | 2026-03-19 01:03:28.828162 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-19 01:03:28.828186 | orchestrator | Thursday 19 March 2026 01:02:24 +0000 (0:00:00.397) 0:00:00.823 ******** 2026-03-19 01:03:28.828191 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:28.828197 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:28.828202 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:28.828208 | orchestrator | 2026-03-19 01:03:28.828213 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:03:28.828219 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828226 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828231 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828236 | orchestrator | 2026-03-19 01:03:28.828242 | orchestrator | 2026-03-19 01:03:28.828249 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-19 01:03:28.828254 | orchestrator | Thursday 19 March 2026 01:02:25 +0000 (0:00:00.961) 0:00:01.785 ******** 2026-03-19 01:03:28.828259 | orchestrator | =============================================================================== 2026-03-19 01:03:28.828265 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.96s 2026-03-19 01:03:28.828271 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2026-03-19 01:03:28.828277 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-03-19 01:03:28.828282 | orchestrator | 2026-03-19 01:03:28.828288 | orchestrator | 2026-03-19 01:03:28.828294 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:03:28.828299 | orchestrator | 2026-03-19 01:03:28.828304 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:03:28.828310 | orchestrator | Thursday 19 March 2026 01:02:31 +0000 (0:00:00.770) 0:00:00.770 ******** 2026-03-19 01:03:28.828315 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:28.828321 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:28.828326 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:28.828331 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:03:28.828336 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:03:28.828341 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:03:28.828346 | orchestrator | ok: [testbed-manager] 2026-03-19 01:03:28.828351 | orchestrator | 2026-03-19 01:03:28.828356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:03:28.828361 | orchestrator | Thursday 19 March 2026 01:02:32 +0000 (0:00:01.058) 0:00:01.828 ******** 2026-03-19 01:03:28.828367 | orchestrator | ok: [testbed-node-0] => 
(item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828372 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828377 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828382 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828388 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828393 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828398 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-19 01:03:28.828403 | orchestrator | 2026-03-19 01:03:28.828408 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-19 01:03:28.828413 | orchestrator | 2026-03-19 01:03:28.828419 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-19 01:03:28.828475 | orchestrator | Thursday 19 March 2026 01:02:33 +0000 (0:00:01.349) 0:00:03.178 ******** 2026-03-19 01:03:28.828482 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-19 01:03:28.828489 | orchestrator | 2026-03-19 01:03:28.828495 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-19 01:03:28.828512 | orchestrator | Thursday 19 March 2026 01:02:36 +0000 (0:00:02.746) 0:00:05.925 ******** 2026-03-19 01:03:28.828517 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-03-19 01:03:28.828523 | orchestrator | 2026-03-19 01:03:28.828579 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-19 01:03:28.828586 | orchestrator | Thursday 19 March 2026 01:02:40 +0000 (0:00:03.876) 0:00:09.802 ******** 2026-03-19 01:03:28.828592 | orchestrator | changed: 
[testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-19 01:03:28.828609 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-19 01:03:28.828615 | orchestrator | 2026-03-19 01:03:28.828621 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-19 01:03:28.828626 | orchestrator | Thursday 19 March 2026 01:02:46 +0000 (0:00:06.609) 0:00:16.411 ******** 2026-03-19 01:03:28.828631 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 01:03:28.828636 | orchestrator | 2026-03-19 01:03:28.828643 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-19 01:03:28.828648 | orchestrator | Thursday 19 March 2026 01:02:50 +0000 (0:00:03.296) 0:00:19.708 ******** 2026-03-19 01:03:28.828653 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-03-19 01:03:28.828659 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:03:28.828663 | orchestrator | 2026-03-19 01:03:28.828669 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-19 01:03:28.828674 | orchestrator | Thursday 19 March 2026 01:02:54 +0000 (0:00:03.997) 0:00:23.706 ******** 2026-03-19 01:03:28.828680 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:03:28.828685 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-03-19 01:03:28.828691 | orchestrator | 2026-03-19 01:03:28.828696 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-19 01:03:28.828702 | orchestrator | Thursday 19 March 2026 01:02:59 +0000 (0:00:05.619) 0:00:29.325 ******** 2026-03-19 01:03:28.828707 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 
2026-03-19 01:03:28.828712 | orchestrator | 2026-03-19 01:03:28.828718 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:03:28.828723 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828729 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828734 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828740 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828745 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828750 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828756 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:03:28.828761 | orchestrator | 2026-03-19 01:03:28.828767 | orchestrator | 2026-03-19 01:03:28.828772 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:03:28.828777 | orchestrator | Thursday 19 March 2026 01:03:05 +0000 (0:00:06.034) 0:00:35.360 ******** 2026-03-19 01:03:28.828783 | orchestrator | =============================================================================== 2026-03-19 01:03:28.828793 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.61s 2026-03-19 01:03:28.828798 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.03s 2026-03-19 01:03:28.828814 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.62s 2026-03-19 01:03:28.828819 | orchestrator | service-ks-register : ceph-rgw | Creating users 
------------------------- 4.00s 2026-03-19 01:03:28.828824 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.88s 2026-03-19 01:03:28.828830 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.30s 2026-03-19 01:03:28.828863 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.75s 2026-03-19 01:03:28.828868 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2026-03-19 01:03:28.828871 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.06s 2026-03-19 01:03:28.828885 | orchestrator | 2026-03-19 01:03:28.829109 | orchestrator | 2026-03-19 01:03:28.829120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:03:28.829123 | orchestrator | 2026-03-19 01:03:28.829127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:03:28.829130 | orchestrator | Thursday 19 March 2026 01:01:38 +0000 (0:00:00.368) 0:00:00.368 ******** 2026-03-19 01:03:28.829133 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:28.829137 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:28.829140 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:28.829143 | orchestrator | 2026-03-19 01:03:28.829146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:03:28.829150 | orchestrator | Thursday 19 March 2026 01:01:38 +0000 (0:00:00.215) 0:00:00.583 ******** 2026-03-19 01:03:28.829153 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-19 01:03:28.829156 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-19 01:03:28.829164 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-19 01:03:28.829167 | orchestrator | 2026-03-19 01:03:28.829171 | orchestrator 
| PLAY [Apply role magnum] ******************************************************* 2026-03-19 01:03:28.829174 | orchestrator | 2026-03-19 01:03:28.829177 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-19 01:03:28.829180 | orchestrator | Thursday 19 March 2026 01:01:38 +0000 (0:00:00.218) 0:00:00.802 ******** 2026-03-19 01:03:28.829184 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:03:28.829188 | orchestrator | 2026-03-19 01:03:28.829191 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-19 01:03:28.829194 | orchestrator | Thursday 19 March 2026 01:01:39 +0000 (0:00:01.184) 0:00:01.986 ******** 2026-03-19 01:03:28.829197 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-19 01:03:28.829200 | orchestrator | 2026-03-19 01:03:28.829203 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-19 01:03:28.829207 | orchestrator | Thursday 19 March 2026 01:01:43 +0000 (0:00:03.609) 0:00:05.595 ******** 2026-03-19 01:03:28.829210 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-19 01:03:28.829213 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-19 01:03:28.829216 | orchestrator | 2026-03-19 01:03:28.829219 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-19 01:03:28.829223 | orchestrator | Thursday 19 March 2026 01:01:50 +0000 (0:00:06.978) 0:00:12.574 ******** 2026-03-19 01:03:28.829226 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 01:03:28.829229 | orchestrator | 2026-03-19 01:03:28.829233 | orchestrator | TASK [service-ks-register : magnum | Creating users] 
*************************** 2026-03-19 01:03:28.829236 | orchestrator | Thursday 19 March 2026 01:01:53 +0000 (0:00:02.990) 0:00:15.565 ******** 2026-03-19 01:03:28.829244 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-19 01:03:28.829248 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:03:28.829251 | orchestrator | 2026-03-19 01:03:28.829254 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-19 01:03:28.829257 | orchestrator | Thursday 19 March 2026 01:01:56 +0000 (0:00:03.570) 0:00:19.135 ******** 2026-03-19 01:03:28.829260 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:03:28.829264 | orchestrator | 2026-03-19 01:03:28.829267 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-19 01:03:28.829270 | orchestrator | Thursday 19 March 2026 01:01:59 +0000 (0:00:02.878) 0:00:22.013 ******** 2026-03-19 01:03:28.829273 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-19 01:03:28.829278 | orchestrator | 2026-03-19 01:03:28.829283 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-19 01:03:28.829288 | orchestrator | Thursday 19 March 2026 01:02:03 +0000 (0:00:03.447) 0:00:25.460 ******** 2026-03-19 01:03:28.829293 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.829302 | orchestrator | 2026-03-19 01:03:28.829306 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-19 01:03:28.829312 | orchestrator | Thursday 19 March 2026 01:02:06 +0000 (0:00:03.602) 0:00:29.063 ******** 2026-03-19 01:03:28.829317 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.829322 | orchestrator | 2026-03-19 01:03:28.829327 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-19 
01:03:28.829332 | orchestrator | Thursday 19 March 2026 01:02:11 +0000 (0:00:04.416) 0:00:33.479 ******** 2026-03-19 01:03:28.829337 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.829342 | orchestrator | 2026-03-19 01:03:28.829347 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-19 01:03:28.829351 | orchestrator | Thursday 19 March 2026 01:02:15 +0000 (0:00:03.757) 0:00:37.237 ******** 2026-03-19 01:03:28.829364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829404 | orchestrator | 2026-03-19 01:03:28.829410 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-19 01:03:28.829416 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:01.578) 0:00:38.815 ******** 2026-03-19 01:03:28.829424 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:28.829429 | orchestrator | 2026-03-19 
01:03:28.829434 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-19 01:03:28.829439 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:00.104) 0:00:38.919 ******** 2026-03-19 01:03:28.829444 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:28.829449 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:28.829454 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:28.829459 | orchestrator | 2026-03-19 01:03:28.829464 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-19 01:03:28.829490 | orchestrator | Thursday 19 March 2026 01:02:16 +0000 (0:00:00.201) 0:00:39.121 ******** 2026-03-19 01:03:28.829496 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 01:03:28.829506 | orchestrator | 2026-03-19 01:03:28.829511 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-19 01:03:28.829517 | orchestrator | Thursday 19 March 2026 01:02:17 +0000 (0:00:00.713) 0:00:39.834 ******** 2026-03-19 01:03:28.829522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829544 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829568 | orchestrator | 2026-03-19 01:03:28.829574 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-19 01:03:28.829579 | orchestrator | Thursday 19 March 2026 01:02:19 +0000 (0:00:02.043) 0:00:41.878 ******** 2026-03-19 01:03:28.829584 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:28.829589 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:28.829594 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:28.829599 | orchestrator | 2026-03-19 01:03:28.829605 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-19 01:03:28.829610 | orchestrator | Thursday 19 March 2026 01:02:20 +0000 (0:00:00.383) 0:00:42.261 ******** 2026-03-19 01:03:28.829617 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:03:28.829622 | orchestrator | 2026-03-19 01:03:28.829628 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-19 01:03:28.829633 | orchestrator | Thursday 19 March 2026 01:02:20 +0000 (0:00:00.468) 0:00:42.730 ******** 2026-03-19 01:03:28.829639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829682 | orchestrator | 2026-03-19 01:03:28.829688 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-19 01:03:28.829693 | orchestrator | Thursday 19 March 2026 01:02:22 +0000 (0:00:01.865) 0:00:44.596 ******** 2026-03-19 01:03:28.829703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.829718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.829724 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:28.829730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.829736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.829742 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:28.829749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.829758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.829767 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:28.829773 | orchestrator | 2026-03-19 01:03:28.829779 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-19 01:03:28.829784 | orchestrator | Thursday 19 March 2026 01:02:23 +0000 (0:00:00.934) 0:00:45.530 ******** 2026-03-19 01:03:28.829790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.829796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.829813 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:28.829819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.829846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2026-03-19 01:03:28.829855 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:28.829866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.829874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.829880 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:28.829885 | orchestrator | 2026-03-19 01:03:28.829891 | orchestrator | TASK [magnum : Copying over config.json files 
for services] ******************** 2026-03-19 01:03:28.829897 | orchestrator | Thursday 19 March 2026 01:02:24 +0000 (0:00:00.797) 0:00:46.327 ******** 2026-03-19 01:03:28.829903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.829948 | orchestrator | 2026-03-19 01:03:28.829954 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-19 01:03:28.829960 | orchestrator | Thursday 19 March 2026 01:02:26 +0000 (0:00:02.077) 0:00:48.405 ******** 2026-03-19 01:03:28.829966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.829995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.830001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.830007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.830057 | orchestrator | 2026-03-19 01:03:28.830066 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-19 01:03:28.830071 | orchestrator | Thursday 19 March 2026 01:02:32 +0000 (0:00:06.505) 0:00:54.911 ******** 2026-03-19 01:03:28.830081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.830090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.830096 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:28.830102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.830107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.830113 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:28.830117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 01:03:28.830123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:03:28.830127 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:28.830131 | orchestrator | 2026-03-19 01:03:28.830135 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-19 01:03:28.830140 | orchestrator | Thursday 19 March 2026 01:02:33 +0000 (0:00:00.859) 0:00:55.770 ******** 2026-03-19 01:03:28.830148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.830154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.830159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 01:03:28.830169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.830178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.830187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:03:28.830193 | orchestrator | 2026-03-19 01:03:28.830199 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-19 01:03:28.830204 | orchestrator | Thursday 19 March 2026 01:02:35 +0000 (0:00:02.297) 0:00:58.067 ******** 2026-03-19 01:03:28.830209 | orchestrator 
| skipping: [testbed-node-0] 2026-03-19 01:03:28.830214 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:28.830219 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:28.830224 | orchestrator | 2026-03-19 01:03:28.830229 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-19 01:03:28.830234 | orchestrator | Thursday 19 March 2026 01:02:36 +0000 (0:00:00.542) 0:00:58.609 ******** 2026-03-19 01:03:28.830239 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.830244 | orchestrator | 2026-03-19 01:03:28.830249 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-19 01:03:28.830254 | orchestrator | Thursday 19 March 2026 01:02:38 +0000 (0:00:02.175) 0:01:00.785 ******** 2026-03-19 01:03:28.830263 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.830269 | orchestrator | 2026-03-19 01:03:28.830274 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-19 01:03:28.830280 | orchestrator | Thursday 19 March 2026 01:02:41 +0000 (0:00:02.499) 0:01:03.284 ******** 2026-03-19 01:03:28.830285 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.830288 | orchestrator | 2026-03-19 01:03:28.830291 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-19 01:03:28.830294 | orchestrator | Thursday 19 March 2026 01:02:56 +0000 (0:00:15.209) 0:01:18.494 ******** 2026-03-19 01:03:28.830297 | orchestrator | 2026-03-19 01:03:28.830300 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-19 01:03:28.830304 | orchestrator | Thursday 19 March 2026 01:02:56 +0000 (0:00:00.382) 0:01:18.876 ******** 2026-03-19 01:03:28.830307 | orchestrator | 2026-03-19 01:03:28.830310 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-19 
01:03:28.830313 | orchestrator | Thursday 19 March 2026 01:02:56 +0000 (0:00:00.080) 0:01:18.956 ******** 2026-03-19 01:03:28.830316 | orchestrator | 2026-03-19 01:03:28.830320 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-19 01:03:28.830323 | orchestrator | Thursday 19 March 2026 01:02:56 +0000 (0:00:00.059) 0:01:19.016 ******** 2026-03-19 01:03:28.830326 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.830330 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:03:28.830333 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:03:28.830338 | orchestrator | 2026-03-19 01:03:28.830343 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-19 01:03:28.830348 | orchestrator | Thursday 19 March 2026 01:03:13 +0000 (0:00:16.895) 0:01:35.911 ******** 2026-03-19 01:03:28.830353 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:28.830359 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:03:28.830364 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:03:28.830370 | orchestrator | 2026-03-19 01:03:28.830375 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:03:28.830380 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 01:03:28.830386 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 01:03:28.830391 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 01:03:28.830396 | orchestrator | 2026-03-19 01:03:28.830401 | orchestrator | 2026-03-19 01:03:28.830407 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:03:28.830411 | orchestrator | Thursday 19 March 2026 01:03:27 +0000 (0:00:14.114) 0:01:50.026 ******** 
2026-03-19 01:03:28.830415 | orchestrator | =============================================================================== 2026-03-19 01:03:28.830418 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.90s 2026-03-19 01:03:28.830424 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.21s 2026-03-19 01:03:28.830428 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.11s 2026-03-19 01:03:28.830431 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.98s 2026-03-19 01:03:28.830434 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.51s 2026-03-19 01:03:28.830439 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.42s 2026-03-19 01:03:28.830444 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.76s 2026-03-19 01:03:28.830449 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.61s 2026-03-19 01:03:28.830454 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.60s 2026-03-19 01:03:28.830465 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.57s 2026-03-19 01:03:28.830470 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.45s 2026-03-19 01:03:28.830475 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.99s 2026-03-19 01:03:28.830481 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.88s 2026-03-19 01:03:28.830487 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.50s 2026-03-19 01:03:28.830493 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.30s 2026-03-19 
01:03:28.830497 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s 2026-03-19 01:03:28.830500 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.08s 2026-03-19 01:03:28.830504 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.04s 2026-03-19 01:03:28.830507 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 1.87s 2026-03-19 01:03:28.830510 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.58s 2026-03-19 01:03:28.830514 | orchestrator | 2026-03-19 01:03:28 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:28.830704 | orchestrator | 2026-03-19 01:03:28 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:28.830851 | orchestrator | 2026-03-19 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:31.870066 | orchestrator | 2026-03-19 01:03:31 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:31.870113 | orchestrator | 2026-03-19 01:03:31 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:31.870438 | orchestrator | 2026-03-19 01:03:31 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:31.871205 | orchestrator | 2026-03-19 01:03:31 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:31.871248 | orchestrator | 2026-03-19 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:34.894562 | orchestrator | 2026-03-19 01:03:34 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:34.894678 | orchestrator | 2026-03-19 01:03:34 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:34.894738 | orchestrator | 2026-03-19 01:03:34 | INFO  | Task 
33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:34.895433 | orchestrator | 2026-03-19 01:03:34 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:34.895475 | orchestrator | 2026-03-19 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:37.916946 | orchestrator | 2026-03-19 01:03:37 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:37.917405 | orchestrator | 2026-03-19 01:03:37 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:37.918502 | orchestrator | 2026-03-19 01:03:37 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:37.918938 | orchestrator | 2026-03-19 01:03:37 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:37.918966 | orchestrator | 2026-03-19 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:40.949216 | orchestrator | 2026-03-19 01:03:40 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:40.949607 | orchestrator | 2026-03-19 01:03:40 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:40.950297 | orchestrator | 2026-03-19 01:03:40 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:40.951099 | orchestrator | 2026-03-19 01:03:40 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:40.951129 | orchestrator | 2026-03-19 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:43.985914 | orchestrator | 2026-03-19 01:03:43 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:43.987516 | orchestrator | 2026-03-19 01:03:43 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:43.989829 | orchestrator | 2026-03-19 01:03:43 | INFO  | Task 
33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:43.992498 | orchestrator | 2026-03-19 01:03:43 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:43.992566 | orchestrator | 2026-03-19 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:47.037327 | orchestrator | 2026-03-19 01:03:47 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:47.038748 | orchestrator | 2026-03-19 01:03:47 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:47.039508 | orchestrator | 2026-03-19 01:03:47 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:47.040049 | orchestrator | 2026-03-19 01:03:47 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:47.040079 | orchestrator | 2026-03-19 01:03:47 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:50.069602 | orchestrator | 2026-03-19 01:03:50 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:50.071111 | orchestrator | 2026-03-19 01:03:50 | INFO  | Task b67834c6-679a-447d-a427-8d09255b214a is in state STARTED 2026-03-19 01:03:50.071962 | orchestrator | 2026-03-19 01:03:50 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:50.073966 | orchestrator | 2026-03-19 01:03:50 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:50.074049 | orchestrator | 2026-03-19 01:03:50 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:53.098841 | orchestrator | 2026-03-19 01:03:53 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:03:53.100222 | orchestrator | 2026-03-19 01:03:53 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:03:53.102459 | orchestrator | 2026-03-19 01:03:53 | INFO  | Task 
b67834c6-679a-447d-a427-8d09255b214a is in state SUCCESS 2026-03-19 01:03:53.104344 | orchestrator | 2026-03-19 01:03:53.104399 | orchestrator | 2026-03-19 01:03:53.104406 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:03:53.104410 | orchestrator | 2026-03-19 01:03:53.104414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:03:53.104418 | orchestrator | Thursday 19 March 2026 00:59:28 +0000 (0:00:00.484) 0:00:00.484 ******** 2026-03-19 01:03:53.104424 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:53.104429 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:53.104434 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:53.104438 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:03:53.104443 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:03:53.104447 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:03:53.104452 | orchestrator | 2026-03-19 01:03:53.104457 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:03:53.104462 | orchestrator | Thursday 19 March 2026 00:59:29 +0000 (0:00:00.671) 0:00:01.155 ******** 2026-03-19 01:03:53.104487 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-19 01:03:53.104492 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-19 01:03:53.104497 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-19 01:03:53.104502 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-19 01:03:53.104508 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-19 01:03:53.104514 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-19 01:03:53.104519 | orchestrator | 2026-03-19 01:03:53.104525 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-19 01:03:53.104530 | 
orchestrator | 2026-03-19 01:03:53.104554 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 01:03:53.104559 | orchestrator | Thursday 19 March 2026 00:59:30 +0000 (0:00:00.806) 0:00:01.962 ******** 2026-03-19 01:03:53.104566 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:03:53.104572 | orchestrator | 2026-03-19 01:03:53.104577 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-19 01:03:53.104582 | orchestrator | Thursday 19 March 2026 00:59:31 +0000 (0:00:00.889) 0:00:02.852 ******** 2026-03-19 01:03:53.104587 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:53.104591 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:53.104597 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:53.104617 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:03:53.104623 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:03:53.104628 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:03:53.104633 | orchestrator | 2026-03-19 01:03:53.104663 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-19 01:03:53.104668 | orchestrator | Thursday 19 March 2026 00:59:32 +0000 (0:00:01.367) 0:00:04.219 ******** 2026-03-19 01:03:53.104671 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:53.104674 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:53.104677 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:53.104680 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:03:53.104684 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:03:53.104687 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:03:53.104690 | orchestrator | 2026-03-19 01:03:53.104693 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-19 01:03:53.104696 | 
orchestrator | Thursday 19 March 2026 00:59:33 +0000 (0:00:01.094) 0:00:05.314 ******** 2026-03-19 01:03:53.104707 | orchestrator | ok: [testbed-node-0] => { 2026-03-19 01:03:53.104711 | orchestrator |  "changed": false, 2026-03-19 01:03:53.104714 | orchestrator |  "msg": "All assertions passed" 2026-03-19 01:03:53.104717 | orchestrator | } 2026-03-19 01:03:53.104721 | orchestrator | ok: [testbed-node-1] => { 2026-03-19 01:03:53.104724 | orchestrator |  "changed": false, 2026-03-19 01:03:53.104728 | orchestrator |  "msg": "All assertions passed" 2026-03-19 01:03:53.104733 | orchestrator | } 2026-03-19 01:03:53.104738 | orchestrator | ok: [testbed-node-2] => { 2026-03-19 01:03:53.104743 | orchestrator |  "changed": false, 2026-03-19 01:03:53.104804 | orchestrator |  "msg": "All assertions passed" 2026-03-19 01:03:53.104811 | orchestrator | } 2026-03-19 01:03:53.104828 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 01:03:53.104834 | orchestrator |  "changed": false, 2026-03-19 01:03:53.104839 | orchestrator |  "msg": "All assertions passed" 2026-03-19 01:03:53.104845 | orchestrator | } 2026-03-19 01:03:53.104849 | orchestrator | ok: [testbed-node-4] => { 2026-03-19 01:03:53.104854 | orchestrator |  "changed": false, 2026-03-19 01:03:53.104859 | orchestrator |  "msg": "All assertions passed" 2026-03-19 01:03:53.104865 | orchestrator | } 2026-03-19 01:03:53.104870 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 01:03:53.104875 | orchestrator |  "changed": false, 2026-03-19 01:03:53.104881 | orchestrator |  "msg": "All assertions passed" 2026-03-19 01:03:53.104886 | orchestrator | } 2026-03-19 01:03:53.104900 | orchestrator | 2026-03-19 01:03:53.104907 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-19 01:03:53.104912 | orchestrator | Thursday 19 March 2026 00:59:34 +0000 (0:00:00.538) 0:00:05.852 ******** 2026-03-19 01:03:53.104917 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
01:03:53.104922 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.104928 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.104933 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.104938 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.104944 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.104949 | orchestrator | 2026-03-19 01:03:53.104955 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-19 01:03:53.104960 | orchestrator | Thursday 19 March 2026 00:59:34 +0000 (0:00:00.569) 0:00:06.421 ******** 2026-03-19 01:03:53.104965 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-19 01:03:53.104970 | orchestrator | 2026-03-19 01:03:53.104974 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-19 01:03:53.104980 | orchestrator | Thursday 19 March 2026 00:59:38 +0000 (0:00:03.876) 0:00:10.298 ******** 2026-03-19 01:03:53.104985 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-19 01:03:53.104991 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-19 01:03:53.104997 | orchestrator | 2026-03-19 01:03:53.105026 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-19 01:03:53.105032 | orchestrator | Thursday 19 March 2026 00:59:46 +0000 (0:00:08.018) 0:00:18.317 ******** 2026-03-19 01:03:53.105038 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 01:03:53.105043 | orchestrator | 2026-03-19 01:03:53.105048 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-19 01:03:53.105054 | orchestrator | Thursday 19 March 2026 00:59:49 +0000 (0:00:03.101) 0:00:21.418 ******** 2026-03-19 01:03:53.105059 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-19 01:03:53.105066 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:03:53.105071 | orchestrator | 2026-03-19 01:03:53.105076 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-19 01:03:53.105081 | orchestrator | Thursday 19 March 2026 00:59:53 +0000 (0:00:03.926) 0:00:25.345 ******** 2026-03-19 01:03:53.105086 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:03:53.105091 | orchestrator | 2026-03-19 01:03:53.105096 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-19 01:03:53.105101 | orchestrator | Thursday 19 March 2026 00:59:57 +0000 (0:00:03.801) 0:00:29.147 ******** 2026-03-19 01:03:53.105106 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-19 01:03:53.105112 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-19 01:03:53.105117 | orchestrator | 2026-03-19 01:03:53.105123 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 01:03:53.105128 | orchestrator | Thursday 19 March 2026 01:00:05 +0000 (0:00:07.760) 0:00:36.908 ******** 2026-03-19 01:03:53.105134 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105140 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105145 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105150 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.105156 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105159 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105163 | orchestrator | 2026-03-19 01:03:53.105167 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-19 01:03:53.105170 | orchestrator | Thursday 19 March 2026 01:00:05 +0000 (0:00:00.579) 
0:00:37.488 ******** 2026-03-19 01:03:53.105175 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105182 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105191 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.105195 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105199 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105202 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105206 | orchestrator | 2026-03-19 01:03:53.105209 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-19 01:03:53.105214 | orchestrator | Thursday 19 March 2026 01:00:08 +0000 (0:00:02.293) 0:00:39.782 ******** 2026-03-19 01:03:53.105221 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:03:53.105224 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:03:53.105228 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:03:53.105232 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:03:53.105235 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:03:53.105240 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:03:53.105246 | orchestrator | 2026-03-19 01:03:53.105250 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-19 01:03:53.105254 | orchestrator | Thursday 19 March 2026 01:00:10 +0000 (0:00:01.974) 0:00:41.756 ******** 2026-03-19 01:03:53.105258 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.105261 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105265 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105270 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105276 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105280 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105283 | orchestrator | 2026-03-19 01:03:53.105287 | orchestrator | TASK [neutron : Ensuring config directories exist] 
***************************** 2026-03-19 01:03:53.105290 | orchestrator | Thursday 19 March 2026 01:00:12 +0000 (0:00:02.896) 0:00:44.653 ******** 2026-03-19 01:03:53.105303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.105317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-19 01:03:53.105321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.105332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.105339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.105343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.105347 | orchestrator | 2026-03-19 01:03:53.105352 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-19 01:03:53.105358 | orchestrator | Thursday 19 March 2026 01:00:15 +0000 (0:00:03.005) 0:00:47.658 ******** 2026-03-19 01:03:53.105362 | orchestrator | [WARNING]: Skipped 2026-03-19 01:03:53.105366 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-19 01:03:53.105370 | orchestrator | due to this access issue: 2026-03-19 01:03:53.105374 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-19 01:03:53.105378 | orchestrator | a directory 2026-03-19 01:03:53.105382 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 01:03:53.105388 | orchestrator | 2026-03-19 01:03:53.105392 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 01:03:53.105398 | orchestrator | Thursday 19 March 2026 01:00:16 +0000 (0:00:00.839) 0:00:48.497 ******** 2026-03-19 01:03:53.105403 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:03:53.105407 | orchestrator | 2026-03-19 01:03:53.105412 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-19 01:03:53.105421 | orchestrator | Thursday 19 March 2026 01:00:18 +0000 (0:00:01.628) 0:00:50.126 ******** 2026-03-19 01:03:53.105425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.105429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.105435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.105440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.105452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.105462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.105468 | orchestrator | 2026-03-19 
01:03:53.105474 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-19 01:03:53.105479 | orchestrator | Thursday 19 March 2026 01:00:23 +0000 (0:00:05.342) 0:00:55.468 ******** 2026-03-19 01:03:53.105485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105490 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105504 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105519 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.105529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-19 01:03:53.105535 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105546 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105558 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105564 | orchestrator | 2026-03-19 01:03:53.105568 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-19 01:03:53.105574 | orchestrator | 
Thursday 19 March 2026 01:00:26 +0000 (0:00:02.999) 0:00:58.467 ******** 2026-03-19 01:03:53.105581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105585 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105603 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105608 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105614 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105626 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105639 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105645 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.105649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105659 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105662 | orchestrator | 2026-03-19 01:03:53.105666 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-19 01:03:53.105670 | orchestrator | Thursday 19 March 2026 01:00:30 +0000 (0:00:04.038) 0:01:02.506 ******** 2026-03-19 01:03:53.105674 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
01:03:53.105678 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105682 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105686 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105689 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105693 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105696 | orchestrator | 2026-03-19 01:03:53.105700 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-19 01:03:53.105706 | orchestrator | Thursday 19 March 2026 01:00:34 +0000 (0:00:03.701) 0:01:06.207 ******** 2026-03-19 01:03:53.105710 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105714 | orchestrator | 2026-03-19 01:03:53.105717 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-19 01:03:53.105722 | orchestrator | Thursday 19 March 2026 01:00:34 +0000 (0:00:00.510) 0:01:06.718 ******** 2026-03-19 01:03:53.105726 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105730 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.105733 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105737 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.105741 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105744 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.105748 | orchestrator | 2026-03-19 01:03:53.105768 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-19 01:03:53.105777 | orchestrator | Thursday 19 March 2026 01:00:35 +0000 (0:00:00.904) 0:01:07.623 ******** 2026-03-19 01:03:53.105783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105788 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.105794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.105804 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.105814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105820 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.105825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.105831 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.106093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106113 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.106119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106126 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.106131 | orchestrator | 2026-03-19 01:03:53.106137 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-19 01:03:53.106142 | orchestrator | Thursday 19 March 2026 01:00:39 +0000 (0:00:03.406) 0:01:11.029 ******** 2026-03-19 01:03:53.106151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.106174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.106193 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.106203 | orchestrator | 2026-03-19 01:03:53.106211 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-19 01:03:53.106217 | orchestrator | Thursday 19 March 2026 01:00:43 +0000 (0:00:04.134) 0:01:15.164 ******** 2026-03-19 01:03:53.106223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106232 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.106256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.106262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.106267 | orchestrator | 2026-03-19 01:03:53.106273 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-19 01:03:53.106279 | orchestrator | Thursday 19 March 2026 01:00:49 +0000 (0:00:06.438) 0:01:21.603 ******** 2026-03-19 01:03:53.106288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106294 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.106300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106305 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.106311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106320 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.106328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106334 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.106340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106346 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.106355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106360 | orchestrator | skipping: [testbed-node-4] 
2026-03-19 01:03:53.106366 | orchestrator |
2026-03-19 01:03:53.106372 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-03-19 01:03:53.106378 | orchestrator | Thursday 19 March 2026 01:00:51 +0000 (0:00:02.015) 0:01:23.618 ********
2026-03-19 01:03:53.106383 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106389 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106393 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:03:53.106398 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106408 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:03:53.106414 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:03:53.106419 | orchestrator |
2026-03-19 01:03:53.106425 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-03-19 01:03:53.106431 | orchestrator | Thursday 19 March 2026 01:00:54 +0000 (0:00:02.829) 0:01:26.447 ********
2026-03-19 01:03:53.106437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 01:03:53.106443 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name':
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106457 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.106463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106469 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.106476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.106519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-19 01:03:53.106525 | orchestrator |
2026-03-19 01:03:53.106562 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-03-19 01:03:53.106569 | orchestrator | Thursday 19 March 2026 01:00:58 +0000 (0:00:03.600) 0:01:30.048 ********
2026-03-19 01:03:53.106574 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106577 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106580 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106583 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106588 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106593 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106615 | orchestrator |
2026-03-19 01:03:53.106624 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-03-19 01:03:53.106628 | orchestrator | Thursday 19 March 2026 01:01:00 +0000 (0:00:01.899) 0:01:31.948 ********
2026-03-19 01:03:53.106631 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106634 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106637 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106640 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106643 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106647 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106650 | orchestrator |
2026-03-19 01:03:53.106653 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-19 01:03:53.106656 | orchestrator | Thursday 19 March 2026 01:01:02 +0000 (0:00:02.010) 0:01:33.958 ********
2026-03-19 01:03:53.106659 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106662 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106665 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106669 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106672 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106675 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106678 | orchestrator |
2026-03-19 01:03:53.106681 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-19 01:03:53.106684 | orchestrator | Thursday 19 March 2026 01:01:03 +0000 (0:00:01.754) 0:01:35.712 ********
2026-03-19 01:03:53.106687 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106691 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106694 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106697 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106700 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106706 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106709 | orchestrator |
2026-03-19 01:03:53.106713 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-19 01:03:53.106716 | orchestrator | Thursday 19 March 2026 01:01:05 +0000 (0:00:01.934) 0:01:37.646 ********
2026-03-19 01:03:53.106719 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106722 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106726 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106729 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106736 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106750 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106770 | orchestrator |
2026-03-19 01:03:53.106776 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-19 01:03:53.106780 | orchestrator | Thursday 19 March 2026 01:01:08 +0000 (0:00:02.320) 0:01:39.967 ********
2026-03-19 01:03:53.106785 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106791 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106797 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106800 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106803 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106806 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.106809 | orchestrator |
2026-03-19 01:03:53.106813 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-19 01:03:53.106816 | orchestrator | Thursday 19 March 2026 01:01:09 +0000 (0:00:01.763) 0:01:41.731 ********
2026-03-19 01:03:53.106819 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-19 01:03:53.106824 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.106829 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-19 01:03:53.106833 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.106842 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-19 01:03:53.106848 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.106853 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-19 01:03:53.106858 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.106863 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-19 01:03:53.106868 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.106873 | orchestrator | skipping: [testbed-node-4] =>
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 01:03:53.106876 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.106879 | orchestrator | 2026-03-19 01:03:53.106883 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-19 01:03:53.106886 | orchestrator | Thursday 19 March 2026 01:01:11 +0000 (0:00:01.852) 0:01:43.583 ******** 2026-03-19 01:03:53.106889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106893 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.106900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106908 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.106915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106918 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.106921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106925 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.106928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106931 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.106935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.106940 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.106944 | orchestrator 
| 2026-03-19 01:03:53.106949 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-19 01:03:53.106953 | orchestrator | Thursday 19 March 2026 01:01:14 +0000 (0:00:02.519) 0:01:46.103 ******** 2026-03-19 01:03:53.106956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.106959 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.107111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.107118 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.107121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.107125 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.107128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-03-19 01:03:53.107135 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.107143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.107148 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.107154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.107158 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.107163 | orchestrator | 2026-03-19 01:03:53.107168 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-19 01:03:53.107173 | 
orchestrator | Thursday 19 March 2026 01:01:16 +0000 (0:00:01.912) 0:01:48.015 ********
2026-03-19 01:03:53.107178 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107186 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107191 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107196 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107202 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107207 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107213 | orchestrator |
2026-03-19 01:03:53.107217 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-19 01:03:53.107220 | orchestrator | Thursday 19 March 2026 01:01:18 +0000 (0:00:01.877) 0:01:49.892 ********
2026-03-19 01:03:53.107223 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107226 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107229 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107232 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:03:53.107235 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:03:53.107238 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:03:53.107241 | orchestrator |
2026-03-19 01:03:53.107245 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-19 01:03:53.107248 | orchestrator | Thursday 19 March 2026 01:01:21 +0000 (0:00:03.210) 0:01:53.103 ********
2026-03-19 01:03:53.107251 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107254 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107257 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107260 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107263 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107267 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107270 | orchestrator |
2026-03-19 01:03:53.107273 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-19 01:03:53.107280 | orchestrator | Thursday 19 March 2026 01:01:24 +0000 (0:00:03.491) 0:01:56.595 ********
2026-03-19 01:03:53.107283 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107286 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107289 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107292 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107295 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107298 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107302 | orchestrator |
2026-03-19 01:03:53.107305 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-19 01:03:53.107308 | orchestrator | Thursday 19 March 2026 01:01:26 +0000 (0:00:01.920) 0:01:58.515 ********
2026-03-19 01:03:53.107311 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107314 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107318 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107321 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107324 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107327 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107330 | orchestrator |
2026-03-19 01:03:53.107333 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-19 01:03:53.107336 | orchestrator | Thursday 19 March 2026 01:01:28 +0000 (0:00:01.771) 0:02:00.287 ********
2026-03-19 01:03:53.107340 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107343 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107346 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107349 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107352 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107355 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107358 | orchestrator |
2026-03-19 01:03:53.107361 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-19 01:03:53.107365 | orchestrator | Thursday 19 March 2026 01:01:30 +0000 (0:00:01.539) 0:02:01.826 ********
2026-03-19 01:03:53.107368 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107371 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107374 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107377 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107380 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107383 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107386 | orchestrator |
2026-03-19 01:03:53.107389 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-19 01:03:53.107394 | orchestrator | Thursday 19 March 2026 01:01:31 +0000 (0:00:01.823) 0:02:03.649 ********
2026-03-19 01:03:53.107403 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107411 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107417 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107422 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107427 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107432 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107437 | orchestrator |
2026-03-19 01:03:53.107442 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-19 01:03:53.107446 | orchestrator | Thursday 19 March 2026 01:01:33 +0000 (0:00:02.607) 0:02:05.212 ********
2026-03-19 01:03:53.107451 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107456 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107461 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107466 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107471 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107476 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107480 | orchestrator |
2026-03-19 01:03:53.107486 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-19 01:03:53.107491 | orchestrator | Thursday 19 March 2026 01:01:36 +0000 (0:00:02.607) 0:02:07.820 ********
2026-03-19 01:03:53.107496 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-19 01:03:53.107508 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:03:53.107512 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-19 01:03:53.107515 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:03:53.107518 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-19 01:03:53.107521 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:03:53.107524 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-19 01:03:53.107528 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:03:53.107536 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-19 01:03:53.107541 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:03:53.107547 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-19 01:03:53.107552 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:03:53.107557 | orchestrator |
2026-03-19 01:03:53.107562 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-19 01:03:53.107568 | orchestrator |
Thursday 19 March 2026 01:01:38 +0000 (0:00:02.300) 0:02:10.121 ******** 2026-03-19 01:03:53.107574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.107580 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.107585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.107591 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.107599 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.107603 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.107610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.107613 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.107620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 01:03:53.107624 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.107629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 01:03:53.107634 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.107640 | orchestrator | 2026-03-19 01:03:53.107645 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-19 01:03:53.107650 | orchestrator | Thursday 19 March 2026 01:01:40 +0000 (0:00:02.061) 0:02:12.182 ******** 2026-03-19 01:03:53.107656 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.107665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.107678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.107684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 01:03:53.107690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.107695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 01:03:53.107700 | orchestrator | 2026-03-19 01:03:53.107705 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 01:03:53.107721 | orchestrator | Thursday 19 March 2026 01:01:42 +0000 (0:00:02.135) 0:02:14.318 ******** 2026-03-19 01:03:53.107726 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:03:53.107731 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:03:53.107739 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:03:53.107745 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:03:53.107750 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:03:53.107792 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:03:53.107797 | orchestrator | 2026-03-19 01:03:53.107802 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-19 01:03:53.107808 | orchestrator | 
Thursday 19 March 2026 01:01:43 +0000 (0:00:00.511) 0:02:14.829 ******** 2026-03-19 01:03:53.107813 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:53.107818 | orchestrator | 2026-03-19 01:03:53.107823 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-19 01:03:53.107828 | orchestrator | Thursday 19 March 2026 01:01:45 +0000 (0:00:02.390) 0:02:17.220 ******** 2026-03-19 01:03:53.107833 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:53.107838 | orchestrator | 2026-03-19 01:03:53.107843 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-19 01:03:53.107849 | orchestrator | Thursday 19 March 2026 01:01:48 +0000 (0:00:02.677) 0:02:19.898 ******** 2026-03-19 01:03:53.107854 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:53.107859 | orchestrator | 2026-03-19 01:03:53.107864 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 01:03:53.107869 | orchestrator | Thursday 19 March 2026 01:02:28 +0000 (0:00:40.531) 0:03:00.429 ******** 2026-03-19 01:03:53.107874 | orchestrator | 2026-03-19 01:03:53.107879 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 01:03:53.107884 | orchestrator | Thursday 19 March 2026 01:02:28 +0000 (0:00:00.214) 0:03:00.644 ******** 2026-03-19 01:03:53.107889 | orchestrator | 2026-03-19 01:03:53.107894 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 01:03:53.107899 | orchestrator | Thursday 19 March 2026 01:02:29 +0000 (0:00:00.172) 0:03:00.817 ******** 2026-03-19 01:03:53.107904 | orchestrator | 2026-03-19 01:03:53.107909 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 01:03:53.107914 | orchestrator | Thursday 19 March 2026 01:02:29 +0000 (0:00:00.142) 0:03:00.959 ******** 
2026-03-19 01:03:53.107919 | orchestrator | 2026-03-19 01:03:53.107932 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 01:03:53.107937 | orchestrator | Thursday 19 March 2026 01:02:29 +0000 (0:00:00.094) 0:03:01.054 ******** 2026-03-19 01:03:53.107943 | orchestrator | 2026-03-19 01:03:53.107948 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 01:03:53.107954 | orchestrator | Thursday 19 March 2026 01:02:29 +0000 (0:00:00.092) 0:03:01.146 ******** 2026-03-19 01:03:53.107959 | orchestrator | 2026-03-19 01:03:53.107964 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-19 01:03:53.107970 | orchestrator | Thursday 19 March 2026 01:02:29 +0000 (0:00:00.061) 0:03:01.208 ******** 2026-03-19 01:03:53.107976 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:03:53.107981 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:03:53.107986 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:03:53.107992 | orchestrator | 2026-03-19 01:03:53.107997 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-19 01:03:53.108002 | orchestrator | Thursday 19 March 2026 01:02:56 +0000 (0:00:26.851) 0:03:28.060 ******** 2026-03-19 01:03:53.108007 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:03:53.108012 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:03:53.108017 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:03:53.108022 | orchestrator | 2026-03-19 01:03:53.108028 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:03:53.108033 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 01:03:53.108044 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 
ignored=0 2026-03-19 01:03:53.108050 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-19 01:03:53.108055 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 01:03:53.108060 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 01:03:53.108065 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 01:03:53.108070 | orchestrator | 2026-03-19 01:03:53.108076 | orchestrator | 2026-03-19 01:03:53.108081 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:03:53.108086 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:54.095) 0:04:22.155 ******** 2026-03-19 01:03:53.108092 | orchestrator | =============================================================================== 2026-03-19 01:03:53.108098 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 54.09s 2026-03-19 01:03:53.108103 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.53s 2026-03-19 01:03:53.108109 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.85s 2026-03-19 01:03:53.108115 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 8.02s 2026-03-19 01:03:53.108121 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.76s 2026-03-19 01:03:53.108126 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.44s 2026-03-19 01:03:53.108136 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.34s 2026-03-19 01:03:53.108141 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.13s 
2026-03-19 01:03:53.108146 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.04s 2026-03-19 01:03:53.108152 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.93s 2026-03-19 01:03:53.108158 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.88s 2026-03-19 01:03:53.108165 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.80s 2026-03-19 01:03:53.108171 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.70s 2026-03-19 01:03:53.108176 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.60s 2026-03-19 01:03:53.108181 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.49s 2026-03-19 01:03:53.108187 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.41s 2026-03-19 01:03:53.108192 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.21s 2026-03-19 01:03:53.108197 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.10s 2026-03-19 01:03:53.108202 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.01s 2026-03-19 01:03:53.108207 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.00s 2026-03-19 01:03:53.108213 | orchestrator | 2026-03-19 01:03:53 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:53.108218 | orchestrator | 2026-03-19 01:03:53 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:53.108224 | orchestrator | 2026-03-19 01:03:53 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:03:56.134698 | orchestrator | 2026-03-19 01:03:56 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in 
state STARTED 2026-03-19 01:03:56.134979 | orchestrator | 2026-03-19 01:03:56 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:03:56.136238 | orchestrator | 2026-03-19 01:03:56 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:03:56.137324 | orchestrator | 2026-03-19 01:03:56 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:03:56.137352 | orchestrator | 2026-03-19 01:03:56 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:04:32.521303 | orchestrator | 2026-03-19 01:04:32 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:04:32.521369 | orchestrator | 2026-03-19 01:04:32 | INFO  | Task 
b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:04:32.521375 | orchestrator | 2026-03-19 01:04:32 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:04:32.521379 | orchestrator | 2026-03-19 01:04:32 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:04:32.521382 | orchestrator | 2026-03-19 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:04:35.556661 | orchestrator | 2026-03-19 01:04:35 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:04:35.557867 | orchestrator | 2026-03-19 01:04:35 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:04:35.558697 | orchestrator | 2026-03-19 01:04:35 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state STARTED 2026-03-19 01:04:35.559876 | orchestrator | 2026-03-19 01:04:35 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:04:35.559922 | orchestrator | 2026-03-19 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:04:38.586086 | orchestrator | 2026-03-19 01:04:38 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:04:38.588324 | orchestrator | 2026-03-19 01:04:38 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:04:38.590722 | orchestrator | 2026-03-19 01:04:38 | INFO  | Task 33759bfa-5f30-45d6-bf27-66966fe1bfbf is in state SUCCESS 2026-03-19 01:04:38.591837 | orchestrator | 2026-03-19 01:04:38.591881 | orchestrator | 2026-03-19 01:04:38.591888 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:04:38.591894 | orchestrator | 2026-03-19 01:04:38.591942 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:04:38.591951 | orchestrator | Thursday 19 March 2026 01:03:09 +0000 (0:00:00.283) 0:00:00.284 
********
2026-03-19 01:04:38.591957 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:04:38.591963 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:04:38.591969 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:04:38.591975 | orchestrator |
2026-03-19 01:04:38.591981 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 01:04:38.591986 | orchestrator | Thursday 19 March 2026 01:03:09 +0000 (0:00:00.246) 0:00:00.530 ********
2026-03-19 01:04:38.591992 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-19 01:04:38.591998 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-19 01:04:38.592004 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-19 01:04:38.592010 | orchestrator |
2026-03-19 01:04:38.592016 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-19 01:04:38.592021 | orchestrator |
2026-03-19 01:04:38.592027 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-19 01:04:38.592032 | orchestrator | Thursday 19 March 2026 01:03:09 +0000 (0:00:00.272) 0:00:00.803 ********
2026-03-19 01:04:38.592038 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:04:38.592044 | orchestrator |
2026-03-19 01:04:38.592050 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-19 01:04:38.592114 | orchestrator | Thursday 19 March 2026 01:03:10 +0000 (0:00:00.589) 0:00:01.392 ********
2026-03-19 01:04:38.592122 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-19 01:04:38.592127 | orchestrator |
2026-03-19 01:04:38.592133 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-19 01:04:38.592139 | orchestrator | Thursday 19 March 2026 01:03:13 +0000 (0:00:03.680) 0:00:05.072 ********
2026-03-19 01:04:38.592145 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-19 01:04:38.592150 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-19 01:04:38.592156 | orchestrator |
2026-03-19 01:04:38.592162 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-19 01:04:38.592168 | orchestrator | Thursday 19 March 2026 01:03:20 +0000 (0:00:06.200) 0:00:11.273 ********
2026-03-19 01:04:38.592173 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 01:04:38.592185 | orchestrator |
2026-03-19 01:04:38.592191 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-19 01:04:38.592196 | orchestrator | Thursday 19 March 2026 01:03:23 +0000 (0:00:03.070) 0:00:14.343 ********
2026-03-19 01:04:38.592201 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-19 01:04:38.592207 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 01:04:38.592212 | orchestrator |
2026-03-19 01:04:38.592222 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-19 01:04:38.592228 | orchestrator | Thursday 19 March 2026 01:03:26 +0000 (0:00:03.739) 0:00:18.083 ********
2026-03-19 01:04:38.592233 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-19 01:04:38.592238 | orchestrator |
2026-03-19 01:04:38.592243 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-03-19 01:04:38.592259 | orchestrator | Thursday 19 March 2026 01:03:29 +0000 (0:00:02.814) 0:00:20.898 ********
2026-03-19 01:04:38.592264 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-03-19 01:04:38.592269 | orchestrator |
2026-03-19
01:04:38.592275 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-19 01:04:38.592280 | orchestrator | Thursday 19 March 2026 01:03:33 +0000 (0:00:03.507) 0:00:24.405 ******** 2026-03-19 01:04:38.592300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592313 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592328 | orchestrator | 2026-03-19 01:04:38.592334 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-19 01:04:38.592339 | orchestrator | Thursday 19 March 2026 01:03:37 +0000 (0:00:03.994) 0:00:28.399 ******** 2026-03-19 01:04:38.592345 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:04:38.592350 | orchestrator | 2026-03-19 01:04:38.592356 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-19 
01:04:38.592364 | orchestrator | Thursday 19 March 2026 01:03:37 +0000 (0:00:00.558) 0:00:28.958 ********
2026-03-19 01:04:38.592372 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:04:38.592378 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:04:38.592383 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:04:38.592388 | orchestrator |
2026-03-19 01:04:38.592393 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-19 01:04:38.592398 | orchestrator | Thursday 19 March 2026 01:03:42 +0000 (0:00:04.593) 0:00:33.551 ********
2026-03-19 01:04:38.592403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 01:04:38.592409 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 01:04:38.592415 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 01:04:38.592420 | orchestrator |
2026-03-19 01:04:38.592426 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-19 01:04:38.592431 | orchestrator | Thursday 19 March 2026 01:03:44 +0000 (0:00:01.734) 0:00:35.286 ********
2026-03-19 01:04:38.592436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 01:04:38.592441 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 01:04:38.592447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 01:04:38.592452 | orchestrator |
2026-03-19 01:04:38.592458 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-19 01:04:38.592463 | orchestrator | Thursday 19 March 2026 01:03:45 +0000 (0:00:01.094) 0:00:36.381 ********
2026-03-19 01:04:38.592469 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:04:38.592474 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:04:38.592479 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:04:38.592484 | orchestrator |
2026-03-19 01:04:38.592491 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-19 01:04:38.592496 | orchestrator | Thursday 19 March 2026 01:03:45 +0000 (0:00:00.597) 0:00:36.978 ********
2026-03-19 01:04:38.592502 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:04:38.592507 | orchestrator |
2026-03-19 01:04:38.592513 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-19 01:04:38.592519 | orchestrator | Thursday 19 March 2026 01:03:46 +0000 (0:00:00.182) 0:00:37.161 ********
2026-03-19 01:04:38.592524 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:04:38.592530 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:04:38.592535 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:04:38.592541 | orchestrator |
2026-03-19 01:04:38.592546 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-19 01:04:38.592552 | orchestrator | Thursday 19 March 2026 01:03:46 +0000 (0:00:00.494) 0:00:37.656 ********
2026-03-19 01:04:38.592557 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:04:38.592563 | orchestrator |
2026-03-19 01:04:38.592568 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-03-19 01:04:38.592573 | orchestrator | Thursday 19 March 2026 01:03:47 +0000 (0:00:00.640) 0:00:38.296 ********
2026-03-19 01:04:38.592582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api',
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592616 | orchestrator | 2026-03-19 01:04:38.592621 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-19 01:04:38.592627 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:03.606) 0:00:41.902 ******** 2026-03-19 01:04:38.592637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 01:04:38.592643 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 01:04:38.592660 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.592681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 01:04:38.592687 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.592692 | orchestrator | 2026-03-19 01:04:38.592697 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-19 01:04:38.592703 | orchestrator | Thursday 19 March 2026 01:03:53 +0000 (0:00:02.986) 0:00:44.889 ******** 2026-03-19 01:04:38.592708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 01:04:38.592714 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 01:04:38.592737 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.592748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 01:04:38.592755 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.592760 | orchestrator | 2026-03-19 01:04:38.592765 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-19 01:04:38.592771 | orchestrator | Thursday 19 March 2026 01:03:57 +0000 (0:00:03.483) 0:00:48.373 ******** 2026-03-19 01:04:38.592777 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592782 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.592787 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.592793 | orchestrator | 2026-03-19 01:04:38.592798 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-19 01:04:38.592804 | orchestrator | Thursday 19 March 2026 01:04:01 +0000 (0:00:04.437) 0:00:52.810 ******** 2026-03-19 01:04:38.592813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 01:04:38.592846 | orchestrator | 2026-03-19 01:04:38.592851 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-19 01:04:38.592856 | orchestrator | Thursday 19 March 2026 01:04:06 +0000 (0:00:04.421) 0:00:57.232 ******** 2026-03-19 01:04:38.592862 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:04:38.592868 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:04:38.592872 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:04:38.592878 | orchestrator | 2026-03-19 01:04:38.592884 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-19 01:04:38.592889 | orchestrator | Thursday 19 March 2026 01:04:12 +0000 (0:00:06.617) 0:01:03.850 ******** 2026-03-19 01:04:38.592894 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.592899 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592904 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.592908 | orchestrator | 2026-03-19 01:04:38.592914 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-19 01:04:38.592919 | orchestrator | Thursday 19 March 2026 01:04:17 +0000 (0:00:05.113) 0:01:08.964 ******** 2026-03-19 01:04:38.592924 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.592930 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592936 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.592941 | orchestrator | 2026-03-19 01:04:38.592947 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-19 01:04:38.592951 | orchestrator | Thursday 19 March 2026 
01:04:21 +0000 (0:00:03.425) 0:01:12.389 ******** 2026-03-19 01:04:38.592957 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.592962 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.592970 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592975 | orchestrator | 2026-03-19 01:04:38.592981 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-19 01:04:38.592986 | orchestrator | Thursday 19 March 2026 01:04:24 +0000 (0:00:03.359) 0:01:15.749 ******** 2026-03-19 01:04:38.592991 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.592996 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.593001 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.593006 | orchestrator | 2026-03-19 01:04:38.593011 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-19 01:04:38.593017 | orchestrator | Thursday 19 March 2026 01:04:28 +0000 (0:00:03.887) 0:01:19.637 ******** 2026-03-19 01:04:38.593022 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.593028 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.593034 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.593041 | orchestrator | 2026-03-19 01:04:38.593046 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-19 01:04:38.593052 | orchestrator | Thursday 19 March 2026 01:04:28 +0000 (0:00:00.399) 0:01:20.036 ******** 2026-03-19 01:04:38.593058 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-19 01:04:38.593065 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:04:38.593075 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-19 01:04:38.593081 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:04:38.593087 | 
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-19 01:04:38.593093 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:04:38.593098 | orchestrator | 2026-03-19 01:04:38.593104 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-19 01:04:38.593111 | orchestrator | Thursday 19 March 2026 01:04:33 +0000 (0:00:04.638) 0:01:24.674 ******** 2026-03-19 01:04:38.593117 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 172, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"} 2026-03-19 01:04:38.593127 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 172, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"} 2026-03-19 01:04:38.593133 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. 
String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 172, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"} 2026-03-19 01:04:38.593139 | orchestrator | 2026-03-19 01:04:38.593144 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:04:38.593150 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=1  skipped=11  rescued=0 ignored=0 2026-03-19 01:04:38.593157 | orchestrator | testbed-node-1 : ok=13  changed=7  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-03-19 01:04:38.593162 | orchestrator | testbed-node-2 : ok=13  changed=7  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-03-19 01:04:38.593167 | orchestrator | 2026-03-19 01:04:38.593173 | orchestrator | 2026-03-19 01:04:38.593178 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:04:38.593184 | orchestrator | Thursday 19 March 2026 01:04:37 +0000 (0:00:03.630) 0:01:28.305 ******** 2026-03-19 01:04:38.593189 | orchestrator | =============================================================================== 2026-03-19 01:04:38.593194 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.62s 2026-03-19 01:04:38.593199 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.20s 2026-03-19 01:04:38.593242 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.11s 2026-03-19 01:04:38.593249 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.64s 2026-03-19 01:04:38.593258 | orchestrator | glance : Ensuring glance service ceph config 
subdir exists -------------- 4.59s 2026-03-19 01:04:38.593264 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.44s 2026-03-19 01:04:38.593270 | orchestrator | glance : Copying over config.json files for services -------------------- 4.42s 2026-03-19 01:04:38.593275 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.99s 2026-03-19 01:04:38.593280 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.89s 2026-03-19 01:04:38.593286 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.74s 2026-03-19 01:04:38.593302 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.68s 2026-03-19 01:04:38.593308 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.63s 2026-03-19 01:04:38.593313 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.61s 2026-03-19 01:04:38.593318 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.51s 2026-03-19 01:04:38.593323 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.48s 2026-03-19 01:04:38.593329 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.43s 2026-03-19 01:04:38.593334 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.36s 2026-03-19 01:04:38.593339 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.07s 2026-03-19 01:04:38.593344 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.99s 2026-03-19 01:04:38.593349 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 2.81s 2026-03-19 01:04:38.593446 | orchestrator | 2026-03-19 01:04:38 | INFO  | Task 
27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:05:24.308699 | orchestrator | 2026-03-19 01:05:24 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:05:24.308776 | orchestrator | 2026-03-19 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:05:27.352729 | orchestrator | 2026-03-19 01:05:27 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:05:27.355153 | orchestrator | 2026-03-19 01:05:27 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:05:27.357236 | orchestrator | 2026-03-19 01:05:27 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:05:27.359708 | orchestrator | 2026-03-19 01:05:27 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:05:27.359770 | orchestrator | 2026-03-19 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:05:30.405363 | orchestrator | 2026-03-19 01:05:30 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:05:30.405414 | orchestrator | 2026-03-19 01:05:30 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:05:30.405419 | orchestrator | 2026-03-19 01:05:30 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:05:30.405423 | orchestrator | 2026-03-19 01:05:30 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state STARTED 2026-03-19 01:05:30.405426 | orchestrator | 2026-03-19 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:05:33.444469 | orchestrator | 2026-03-19 01:05:33 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED 2026-03-19 01:05:33.446638 | orchestrator | 2026-03-19 01:05:33 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED 2026-03-19 01:05:33.450945 | orchestrator | 2026-03-19 01:05:33 | INFO  | Task 
b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:05:33.451002 | orchestrator | 2026-03-19 01:05:33 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:05:33.453853 | orchestrator | 2026-03-19 01:05:33 | INFO  | Task 15a7a2af-1cf2-45cf-8103-a16fcc353aa7 is in state SUCCESS 2026-03-19 01:05:33.455226 | orchestrator | 2026-03-19 01:05:33.455286 | orchestrator | 2026-03-19 01:05:33.455304 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:05:33.455311 | orchestrator | 2026-03-19 01:05:33.455317 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:05:33.455323 | orchestrator | Thursday 19 March 2026 01:02:24 +0000 (0:00:00.307) 0:00:00.307 ******** 2026-03-19 01:05:33.455329 | orchestrator | ok: [testbed-manager] 2026-03-19 01:05:33.455344 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:05:33.455350 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:05:33.455361 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:05:33.455366 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:05:33.455371 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:05:33.455376 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:05:33.455395 | orchestrator | 2026-03-19 01:05:33.455401 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:05:33.455406 | orchestrator | Thursday 19 March 2026 01:02:25 +0000 (0:00:00.786) 0:00:01.094 ******** 2026-03-19 01:05:33.455412 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455439 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455444 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455449 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455454 | 
orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455496 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455502 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-19 01:05:33.455507 | orchestrator | 2026-03-19 01:05:33.455512 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-19 01:05:33.455517 | orchestrator | 2026-03-19 01:05:33.455522 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-19 01:05:33.455528 | orchestrator | Thursday 19 March 2026 01:02:26 +0000 (0:00:00.706) 0:00:01.800 ******** 2026-03-19 01:05:33.455534 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:05:33.455540 | orchestrator | 2026-03-19 01:05:33.455546 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-19 01:05:33.455551 | orchestrator | Thursday 19 March 2026 01:02:27 +0000 (0:00:01.382) 0:00:03.182 ******** 2026-03-19 01:05:33.455623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 01:05:33.455693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455728 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455774 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455794 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455819 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455838 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 01:05:33.455845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455851 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.455886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.455891 | orchestrator | 2026-03-19 01:05:33.455897 | 
orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-19 01:05:33.455903 | orchestrator | Thursday 19 March 2026 01:02:31 +0000 (0:00:03.912) 0:00:07.095 ******** 2026-03-19 01:05:33.455914 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:05:33.455924 | orchestrator | 2026-03-19 01:05:33.455929 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-19 01:05:33.455934 | orchestrator | Thursday 19 March 2026 01:02:33 +0000 (0:00:02.199) 0:00:09.295 ******** 2026-03-19 01:05:33.455939 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 01:05:33.455948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.455999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456004 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456057 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456122 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 01:05:33.456145 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-19 01:05:33.456167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.456200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.456460 | orchestrator | 2026-03-19 01:05:33.456465 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-19 01:05:33.456471 | orchestrator | Thursday 19 March 2026 01:02:39 +0000 (0:00:05.621) 0:00:14.916 ******** 2026-03-19 01:05:33.456477 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 01:05:33.456482 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 01:05:33.456529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456580 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:05:33.456586 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:33.456595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456610 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:33.456615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456650 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:33.456660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456676 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:05:33.456682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456706 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:05:33.456711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456732 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:05:33.456855 | orchestrator | 2026-03-19 01:05:33.456863 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-19 01:05:33.456869 | orchestrator | Thursday 19 March 2026 01:02:40 +0000 (0:00:01.274) 0:00:16.191 ******** 2026-03-19 01:05:33.456875 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 01:05:33.456886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456902 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 01:05:33.456909 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456931 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.456980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.456992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.456998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.457004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.457013 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 01:05:33.457025 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:05:33.457031 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:33.457036 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:33.457041 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:33.457050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.457056 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457072 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:05:33.457077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.457083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457097 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:05:33.457102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 01:05:33.457107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 01:05:33.457444 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:05:33.457456 | orchestrator | 2026-03-19 01:05:33.457462 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-19 01:05:33.457467 | orchestrator | Thursday 19 March 2026 01:02:42 +0000 (0:00:01.703) 0:00:17.895 ******** 2026-03-19 01:05:33.457504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457511 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 01:05:33.457517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457538 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 01:05:33.457578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457611 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-19 01:05:33.457627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457655 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 01:05:33.457667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 01:05:33.457732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 01:05:33.457753 | orchestrator | 2026-03-19 01:05:33.457759 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-19 01:05:33.457764 | orchestrator | Thursday 19 March 2026 01:02:48 +0000 (0:00:05.783) 0:00:23.678 ******** 2026-03-19 01:05:33.457783 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 01:05:33.457789 | orchestrator | 2026-03-19 01:05:33.457794 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-19 01:05:33.457802 | orchestrator | Thursday 19 March 2026 01:02:49 +0000 (0:00:00.907) 0:00:24.586 ******** 2026-03-19 01:05:33.457833 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457840 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457846 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457853 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457862 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.457867 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457881 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457893 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457898 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457904 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457932 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457943 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457953 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457961 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1360489, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457967 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457972 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 
1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457977 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457982 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457990 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.457999 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458006 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458011 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1360512, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.154471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458064 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458070 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458076 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458084 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458094 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458099 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458108 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 
1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458114 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458119 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458125 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458137 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458143 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458148 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1360483, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1461792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458157 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458162 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458168 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458174 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458185 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458191 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458196 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 
1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458205 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458210 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458216 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458222 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458234 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458240 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458246 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458255 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458261 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458267 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458272 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458284 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458290 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 
1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458296 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1360504, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1521792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458305 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458311 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458317 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458326 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458336 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458342 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458348 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458357 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458362 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458368 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458377 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458387 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 
'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458394 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458408 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1360476, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1431792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458414 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458429 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458437 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458448 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458457 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458463 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458472 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458478 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 
'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458486 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458492 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458497 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458506 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1360491, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1481793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458512 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458522 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458527 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458536 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458542 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458547 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458584 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458591 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458601 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 
1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458607 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458614 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458620 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1360501, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1511793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458625 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458635 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458641 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458651 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458656 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:33.458662 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458677 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458683 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458691 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458701 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 
'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458707 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458712 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458718 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:33.458727 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 
'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458732 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:05:33.458738 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1360494, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458743 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458752 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458763 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:05:33.458769 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458774 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458780 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:05:33.458786 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 01:05:33.458792 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:33.458809 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1360487, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1474578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458815 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360510, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1540525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458821 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360470, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1402292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458833 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1360527, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1568608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458839 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1360508, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1534417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458845 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1360480, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1443036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458851 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1360472, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1421793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458859 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1360499, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1510663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458866 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1360496, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:05:33.458871 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1360524, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1566827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-19 01:05:33.458880 | orchestrator |
2026-03-19 01:05:33.458886 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-19 01:05:33.458892 | orchestrator | Thursday 19 March 2026 01:03:16 +0000 (0:00:27.123) 0:00:51.710 ********
2026-03-19 01:05:33.458897 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 01:05:33.458903 | orchestrator |
2026-03-19 01:05:33.458911 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-19 01:05:33.458917 | orchestrator | Thursday 19 March 2026 01:03:17 +0000 (0:00:00.767) 0:00:52.477 ********
2026-03-19 01:05:33.458922 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.458928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.458934 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.458941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.458946 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.458951 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 01:05:33.458957 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.458963 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.458968 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.458974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.458980 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.458986 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 01:05:33.458992 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.458997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459003 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.459008 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459014 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.459019 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 01:05:33.459025 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.459030 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459035 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.459041 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459046 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.459052 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-19 01:05:33.459058 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.459064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459069 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.459074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459080 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.459086 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-19 01:05:33.459091 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.459096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459102 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.459107 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459118 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.459123 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 01:05:33.459128 | orchestrator | [WARNING]: Skipped
2026-03-19 01:05:33.459134 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459139 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-19 01:05:33.459147 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-19 01:05:33.459153 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-19 01:05:33.459158 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 01:05:33.459164 | orchestrator |
2026-03-19 01:05:33.459169 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-19 01:05:33.459175 | orchestrator | Thursday 19 March 2026 01:03:18 +0000 (0:00:01.888) 0:00:54.366 ********
2026-03-19 01:05:33.459180 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459187 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459192 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:05:33.459198 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:05:33.459204 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459209 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459215 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459220 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:05:33.459225 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459231 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459237 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459243 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459248 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-19 01:05:33.459254 | orchestrator |
2026-03-19 01:05:33.459259 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-19 01:05:33.459264 | orchestrator | Thursday 19 March 2026 01:03:31 +0000 (0:00:12.936) 0:01:07.303 ********
2026-03-19 01:05:33.459274 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459280 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:05:33.459285 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459291 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:05:33.459296 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459301 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:05:33.459307 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459311 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459316 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459321 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459326 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459332 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459337 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-19 01:05:33.459342 | orchestrator |
2026-03-19 01:05:33.459347 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-19 01:05:33.459352 | orchestrator | Thursday 19 March 2026 01:03:34 +0000 (0:00:02.660) 0:01:09.964 ********
2026-03-19 01:05:33.459362 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459369 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459374 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459380 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:05:33.459385 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459391 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:05:33.459396 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459405 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:05:33.459411 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459416 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459421 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459426 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-19 01:05:33.459432 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459437 | orchestrator |
2026-03-19 01:05:33.459442 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-19 01:05:33.459447 | orchestrator | Thursday 19 March 2026 01:03:36 +0000 (0:00:01.833) 0:01:11.797 ********
2026-03-19 01:05:33.459452 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 01:05:33.459457 | orchestrator |
2026-03-19 01:05:33.459462 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-19 01:05:33.459470 | orchestrator | Thursday 19 March 2026 01:03:37 +0000 (0:00:00.697) 0:01:12.495 ********
2026-03-19 01:05:33.459475 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:05:33.459481 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:05:33.459486 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:05:33.459491 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:05:33.459496 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459499 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459504 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459509 | orchestrator |
2026-03-19 01:05:33.459514 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-19 01:05:33.459519 | orchestrator | Thursday 19 March 2026 01:03:37 +0000 (0:00:00.725) 0:01:13.220 ********
2026-03-19 01:05:33.459523 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:05:33.459528 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459534 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459539 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459545 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:05:33.459549 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:05:33.459554 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:05:33.459572 | orchestrator |
2026-03-19 01:05:33.459577 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-19 01:05:33.459582 | orchestrator | Thursday 19 March 2026 01:03:39 +0000 (0:00:02.210) 0:01:15.431 ********
2026-03-19 01:05:33.459587 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459593 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:05:33.459599 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459603 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:05:33.459606 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459616 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:05:33.459619 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459622 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459631 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459634 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:05:33.459637 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459640 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459643 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-19 01:05:33.459646 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459649 | orchestrator |
2026-03-19 01:05:33.459652 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-19 01:05:33.459655 | orchestrator | Thursday 19 March 2026 01:03:41 +0000 (0:00:01.695) 0:01:17.126 ********
2026-03-19 01:05:33.459659 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459663 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:05:33.459668 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459674 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459679 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459684 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:05:33.459689 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:05:33.459693 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:05:33.459698 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459702 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459707 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459711 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-19 01:05:33.459717 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459722 | orchestrator |
2026-03-19 01:05:33.459727 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-19 01:05:33.459732 | orchestrator
| Thursday 19 March 2026 01:03:43 +0000 (0:00:01.538) 0:01:18.665 ******** 2026-03-19 01:05:33.459737 | orchestrator | [WARNING]: Skipped 2026-03-19 01:05:33.459742 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-19 01:05:33.459748 | orchestrator | due to this access issue: 2026-03-19 01:05:33.459753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-19 01:05:33.459757 | orchestrator | not a directory 2026-03-19 01:05:33.459762 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 01:05:33.459768 | orchestrator | 2026-03-19 01:05:33.459772 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-19 01:05:33.459777 | orchestrator | Thursday 19 March 2026 01:03:44 +0000 (0:00:01.131) 0:01:19.796 ******** 2026-03-19 01:05:33.459783 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:05:33.459787 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:33.459792 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:33.459797 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:33.459802 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:05:33.459806 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:05:33.459811 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:05:33.459816 | orchestrator | 2026-03-19 01:05:33.459827 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-19 01:05:33.459837 | orchestrator | Thursday 19 March 2026 01:03:44 +0000 (0:00:00.632) 0:01:20.429 ******** 2026-03-19 01:05:33.459842 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:05:33.459847 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:33.459852 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:33.459857 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:33.459862 | orchestrator | skipping: 
[testbed-node-3]
2026-03-19 01:05:33.459867 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:05:33.459872 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:05:33.459877 | orchestrator |
2026-03-19 01:05:33.459882 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-19 01:05:33.459888 | orchestrator | Thursday 19 March 2026 01:03:45 +0000 (0:00:00.786) 0:01:21.216 ********
2026-03-19 01:05:33.459894 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-19 01:05:33.459907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459930 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459950 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-19 01:05:33.459956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.459964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.459970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.459976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.459982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.459991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460034 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-19 01:05:33.460045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460071 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-19 01:05:33.460082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 01:05:33.460101 | orchestrator |
2026-03-19 01:05:33.460106 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-19 01:05:33.460112 | orchestrator | Thursday 19 March 2026 01:03:49 +0000 (0:00:03.881) 0:01:25.098 ********
2026-03-19 01:05:33.460117 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-19 01:05:33.460124 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:05:33.460130 | orchestrator |
2026-03-19 01:05:33.460135 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-19 01:05:33.460140 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.915) 0:01:26.013 ********
2026-03-19 01:05:33.460146 | orchestrator |
2026-03-19 01:05:33.460150 | orchestrator | TASK [prometheus : Flush
handlers] *********************************************
2026-03-19 01:05:33.460155 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.066) 0:01:26.079 ********
2026-03-19 01:05:33.460160 | orchestrator |
2026-03-19 01:05:33.460165 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-19 01:05:33.460169 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.061) 0:01:26.141 ********
2026-03-19 01:05:33.460174 | orchestrator |
2026-03-19 01:05:33.460180 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-19 01:05:33.460185 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.095) 0:01:26.236 ********
2026-03-19 01:05:33.460190 | orchestrator |
2026-03-19 01:05:33.460195 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-19 01:05:33.460200 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.060) 0:01:26.296 ********
2026-03-19 01:05:33.460205 | orchestrator |
2026-03-19 01:05:33.460210 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-19 01:05:33.460215 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.059) 0:01:26.356 ********
2026-03-19 01:05:33.460220 | orchestrator |
2026-03-19 01:05:33.460225 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-19 01:05:33.460230 | orchestrator | Thursday 19 March 2026 01:03:50 +0000 (0:00:00.059) 0:01:26.415 ********
2026-03-19 01:05:33.460235 | orchestrator |
2026-03-19 01:05:33.460240 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-19 01:05:33.460245 | orchestrator | Thursday 19 March 2026 01:03:51 +0000 (0:00:00.090) 0:01:26.506 ********
2026-03-19 01:05:33.460250 | orchestrator | changed: [testbed-manager]
2026-03-19 01:05:33.460254 | orchestrator |
2026-03-19 01:05:33.460259 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-19 01:05:33.460269 | orchestrator | Thursday 19 March 2026 01:04:10 +0000 (0:00:19.774) 0:01:46.281 ********
2026-03-19 01:05:33.460274 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:05:33.460279 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:05:33.460285 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:05:33.460290 | orchestrator | changed: [testbed-manager]
2026-03-19 01:05:33.460295 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:05:33.460305 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:05:33.460311 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:05:33.460316 | orchestrator |
2026-03-19 01:05:33.460321 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-19 01:05:33.460326 | orchestrator | Thursday 19 March 2026 01:04:24 +0000 (0:00:13.633) 0:01:59.914 ********
2026-03-19 01:05:33.460332 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:05:33.460337 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:05:33.460342 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:05:33.460347 | orchestrator |
2026-03-19 01:05:33.460352 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-19 01:05:33.460358 | orchestrator | Thursday 19 March 2026 01:04:30 +0000 (0:00:05.817) 0:02:05.731 ********
2026-03-19 01:05:33.460363 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:05:33.460368 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:05:33.460374 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:05:33.460377 | orchestrator |
2026-03-19 01:05:33.460380 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-19 01:05:33.460383 | orchestrator | Thursday 19 March 2026 01:04:41 +0000 (0:00:11.490) 0:02:17.222 ********
2026-03-19 01:05:33.460386 | orchestrator | changed: [testbed-manager]
2026-03-19 01:05:33.460390 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:05:33.460393 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:05:33.460396 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:05:33.460399 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:05:33.460402 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:05:33.460405 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:05:33.460408 | orchestrator |
2026-03-19 01:05:33.460411 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-19 01:05:33.460415 | orchestrator | Thursday 19 March 2026 01:04:55 +0000 (0:00:13.502) 0:02:30.725 ********
2026-03-19 01:05:33.460418 | orchestrator | changed: [testbed-manager]
2026-03-19 01:05:33.460421 | orchestrator |
2026-03-19 01:05:33.460424 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-19 01:05:33.460427 | orchestrator | Thursday 19 March 2026 01:05:06 +0000 (0:00:10.766) 0:02:41.491 ********
2026-03-19 01:05:33.460430 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:05:33.460433 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:05:33.460436 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:05:33.460440 | orchestrator |
2026-03-19 01:05:33.460443 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-19 01:05:33.460446 | orchestrator | Thursday 19 March 2026 01:05:11 +0000 (0:00:05.018) 0:02:46.509 ********
2026-03-19 01:05:33.460449 | orchestrator | changed: [testbed-manager]
2026-03-19 01:05:33.460452 | orchestrator |
2026-03-19 01:05:33.460455 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-19 01:05:33.460458 | orchestrator | Thursday 19 March 2026 01:05:21 +0000 (0:00:10.014) 0:02:56.524 ********
2026-03-19 01:05:33.460461 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:05:33.460464 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:05:33.460467 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:05:33.460470 | orchestrator |
2026-03-19 01:05:33.460473 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:05:33.460478 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-19 01:05:33.460489 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-19 01:05:33.460494 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-19 01:05:33.460499 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-19 01:05:33.460507 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-19 01:05:33.460512 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-19 01:05:33.460517 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-19 01:05:33.460521 | orchestrator |
2026-03-19 01:05:33.460526 | orchestrator |
2026-03-19 01:05:33.460531 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:05:33.460536 | orchestrator | Thursday 19 March 2026 01:05:30 +0000 (0:00:09.837) 0:03:06.362 ********
2026-03-19 01:05:33.460541 | orchestrator | ===============================================================================
2026-03-19 01:05:33.460545 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.12s
2026-03-19 01:05:33.460549 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.77s
2026-03-19 01:05:33.460553 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.63s
2026-03-19 01:05:33.460591 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.50s
2026-03-19 01:05:33.460596 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.94s
2026-03-19 01:05:33.460605 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.49s
2026-03-19 01:05:33.460610 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.77s
2026-03-19 01:05:33.460614 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.01s
2026-03-19 01:05:33.460619 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.84s
2026-03-19 01:05:33.460624 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.82s
2026-03-19 01:05:33.460629 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.78s
2026-03-19 01:05:33.460634 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.62s
2026-03-19 01:05:33.460639 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.02s
2026-03-19 01:05:33.460644 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.91s
2026-03-19 01:05:33.460649 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.88s
2026-03-19 01:05:33.460654 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.66s
2026-03-19 01:05:33.460660 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.21s
2026-03-19 01:05:33.460666 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.20s
2026-03-19 01:05:33.460671 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.89s
2026-03-19 01:05:33.460676 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.83s
2026-03-19 01:05:33.460681 | orchestrator | 2026-03-19 01:05:33 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:36.477060 | orchestrator | 2026-03-19 01:05:36 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:36.478423 | orchestrator | 2026-03-19 01:05:36 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:36.478915 | orchestrator | 2026-03-19 01:05:36 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:36.479714 | orchestrator | 2026-03-19 01:05:36 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:36.479754 | orchestrator | 2026-03-19 01:05:36 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:39.519448 | orchestrator | 2026-03-19 01:05:39 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:39.520971 | orchestrator | 2026-03-19 01:05:39 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:39.522196 | orchestrator | 2026-03-19 01:05:39 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:39.524066 | orchestrator | 2026-03-19 01:05:39 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:39.524120 | orchestrator | 2026-03-19 01:05:39 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:42.572708 | orchestrator | 2026-03-19 01:05:42 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:42.573227 | orchestrator | 2026-03-19 01:05:42 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:42.574646 | orchestrator | 2026-03-19 01:05:42 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:42.577068 | orchestrator | 2026-03-19 01:05:42 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:42.577129 | orchestrator | 2026-03-19 01:05:42 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:45.618785 | orchestrator | 2026-03-19 01:05:45 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:45.620101 | orchestrator | 2026-03-19 01:05:45 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:45.623944 | orchestrator | 2026-03-19 01:05:45 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:45.625245 | orchestrator | 2026-03-19 01:05:45 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:45.625277 | orchestrator | 2026-03-19 01:05:45 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:48.673102 | orchestrator | 2026-03-19 01:05:48 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:48.674421 | orchestrator | 2026-03-19 01:05:48 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:48.677753 | orchestrator | 2026-03-19 01:05:48 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:48.679727 | orchestrator | 2026-03-19 01:05:48 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:48.679784 | orchestrator | 2026-03-19 01:05:48 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:51.720675 | orchestrator | 2026-03-19 01:05:51 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:51.722472 | orchestrator | 2026-03-19 01:05:51 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:51.723775 | orchestrator | 2026-03-19 01:05:51 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:51.723846 | orchestrator | 2026-03-19 01:05:51 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:51.723853 | orchestrator | 2026-03-19 01:05:51 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:54.761685 | orchestrator | 2026-03-19 01:05:54 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state STARTED
2026-03-19 01:05:54.763132 | orchestrator | 2026-03-19 01:05:54 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED
2026-03-19 01:05:54.765295 | orchestrator | 2026-03-19 01:05:54 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED
2026-03-19 01:05:54.767290 | orchestrator | 2026-03-19 01:05:54 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED
2026-03-19 01:05:54.767375 | orchestrator | 2026-03-19 01:05:54 | INFO  | Wait 1 second(s) until the next check
2026-03-19 01:05:57.809216 | orchestrator | 2026-03-19 01:05:57 | INFO  | Task e85ba977-5748-473a-b160-d17193790fb8 is in state SUCCESS
2026-03-19 01:05:57.810818 | orchestrator |
2026-03-19 01:05:57.810882 | orchestrator |
2026-03-19 01:05:57.810891 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 01:05:57.810901 | orchestrator |
2026-03-19 01:05:57.810907 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 01:05:57.810913 | orchestrator | Thursday 19 March 2026 01:03:31 +0000 (0:00:00.231) 0:00:00.231 ********
2026-03-19 01:05:57.810918 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:05:57.810924 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:05:57.810929 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:05:57.810935 | orchestrator |
2026-03-19 01:05:57.810940 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 01:05:57.810945 | orchestrator | Thursday 19 March 2026 01:03:31 +0000 (0:00:00.230) 0:00:00.462 ********
2026-03-19 01:05:57.810951 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-19 01:05:57.810956 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-19 01:05:57.810962 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-19 01:05:57.810967 | orchestrator |
2026-03-19 01:05:57.810973 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-19 01:05:57.810978 | orchestrator |
2026-03-19 01:05:57.810984 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-19 01:05:57.810989 | orchestrator | Thursday 19 March 2026 01:03:31 +0000 (0:00:00.205) 0:00:00.667 ********
2026-03-19 01:05:57.810995 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:05:57.811001 | orchestrator |
2026-03-19 01:05:57.811007 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-19 01:05:57.811013 | orchestrator | Thursday 19 March 2026 01:03:32 +0000 (0:00:00.581) 0:00:01.248 ********
2026-03-19 01:05:57.811033 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-19 01:05:57.811039 | orchestrator |
2026-03-19 01:05:57.811044 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-03-19 01:05:57.811049 | orchestrator | Thursday 19 March 2026 01:03:36 +0000 (0:00:04.015) 0:00:05.264 ********
2026-03-19 01:05:57.811054 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-19 01:05:57.811061 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-19 01:05:57.811067 | orchestrator |
2026-03-19 01:05:57.811072 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-19 01:05:57.811078 | orchestrator | Thursday 19 March 2026 01:03:42 +0000 (0:00:06.666) 0:00:11.930 ********
2026-03-19 01:05:57.811084 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 01:05:57.811090 | orchestrator |
2026-03-19 01:05:57.811095 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-19 01:05:57.811100 | orchestrator | Thursday 19 March 2026 01:03:45 +0000 (0:00:03.081) 0:00:15.012 ********
2026-03-19 01:05:57.811106 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-19 01:05:57.811112 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 01:05:57.811118 | orchestrator |
2026-03-19 01:05:57.811124 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-19 01:05:57.811129 | orchestrator | Thursday 19 March 2026 01:03:49 +0000 (0:00:03.829) 0:00:18.841 ********
2026-03-19 01:05:57.811135 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-19 01:05:57.811140 | orchestrator |
2026-03-19 01:05:57.811145 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-03-19 01:05:57.811170 | orchestrator | Thursday 19 March 2026 01:03:52 +0000 (0:00:03.216) 0:00:22.059 ********
2026-03-19 01:05:57.811176 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-03-19 01:05:57.811181 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-03-19 01:05:57.811187 | orchestrator |
2026-03-19 01:05:57.811192 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-03-19 01:05:57.811197 | orchestrator |
Thursday 19 March 2026 01:03:59 +0000 (0:00:06.805) 0:00:28.864 ******** 2026-03-19 01:05:57.811206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.811336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.811353 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.811374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811436 | orchestrator | 2026-03-19 01:05:57.811440 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-19 01:05:57.811444 | orchestrator | Thursday 19 March 2026 01:04:02 +0000 (0:00:03.012) 0:00:31.876 ******** 2026-03-19 01:05:57.811447 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.811451 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.811455 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.811459 | orchestrator | 2026-03-19 01:05:57.811463 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-19 01:05:57.811467 | orchestrator | Thursday 19 March 2026 01:04:03 +0000 (0:00:00.727) 0:00:32.604 ******** 2026-03-19 01:05:57.811471 | orchestrator | included: 
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:05:57.811475 | orchestrator | 2026-03-19 01:05:57.811479 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-19 01:05:57.811483 | orchestrator | Thursday 19 March 2026 01:04:04 +0000 (0:00:00.861) 0:00:33.466 ******** 2026-03-19 01:05:57.811489 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-19 01:05:57.811493 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-19 01:05:57.811496 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-19 01:05:57.811500 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-19 01:05:57.811504 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-19 01:05:57.811507 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-19 01:05:57.811522 | orchestrator | 2026-03-19 01:05:57.811526 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-19 01:05:57.811530 | orchestrator | Thursday 19 March 2026 01:04:06 +0000 (0:00:02.055) 0:00:35.521 ******** 2026-03-19 01:05:57.811537 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 01:05:57.811544 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 01:05:57.811549 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 01:05:57.811553 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 01:05:57.811561 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 01:05:57.811565 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 01:05:57.811574 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 01:05:57.811580 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 01:05:57.811586 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 01:05:57.811599 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 01:05:57.811605 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 01:05:57.811613 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 01:05:57.811630 | orchestrator | 2026-03-19 01:05:57.811635 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-19 01:05:57.811640 | orchestrator | Thursday 19 March 2026 01:04:09 +0000 (0:00:03.556) 0:00:39.077 ******** 2026-03-19 01:05:57.811645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-19 01:05:57.811651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-19 01:05:57.811656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-19 01:05:57.811662 | orchestrator | 2026-03-19 01:05:57.811668 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-19 01:05:57.811674 | orchestrator | Thursday 19 March 2026 01:04:11 +0000 (0:00:01.852) 0:00:40.929 ******** 2026-03-19 
01:05:57.811679 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-19 01:05:57.811685 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-19 01:05:57.811691 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-19 01:05:57.811696 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 01:05:57.811701 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 01:05:57.811709 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 01:05:57.811715 | orchestrator | 2026-03-19 01:05:57.811720 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-19 01:05:57.811725 | orchestrator | Thursday 19 March 2026 01:04:15 +0000 (0:00:04.012) 0:00:44.942 ******** 2026-03-19 01:05:57.811730 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-19 01:05:57.811735 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-19 01:05:57.811740 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-19 01:05:57.811745 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-19 01:05:57.811750 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-19 01:05:57.811754 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-19 01:05:57.811760 | orchestrator | 2026-03-19 01:05:57.811764 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-19 01:05:57.811770 | orchestrator | Thursday 19 March 2026 01:04:16 +0000 (0:00:01.057) 0:00:45.999 ******** 2026-03-19 01:05:57.811775 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.811780 | orchestrator | 2026-03-19 01:05:57.811785 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 
2026-03-19 01:05:57.811790 | orchestrator | Thursday 19 March 2026 01:04:16 +0000 (0:00:00.177) 0:00:46.177 ******** 2026-03-19 01:05:57.811794 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.811800 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.811805 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.811811 | orchestrator | 2026-03-19 01:05:57.811816 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-19 01:05:57.811823 | orchestrator | Thursday 19 March 2026 01:04:17 +0000 (0:00:00.387) 0:00:46.564 ******** 2026-03-19 01:05:57.811829 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:05:57.811839 | orchestrator | 2026-03-19 01:05:57.811844 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-19 01:05:57.811854 | orchestrator | Thursday 19 March 2026 01:04:17 +0000 (0:00:00.560) 0:00:47.125 ******** 2026-03-19 01:05:57.811859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.811869 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.811875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.811881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811926 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.811930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812117 | orchestrator | 2026-03-19 01:05:57.812120 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-19 01:05:57.812124 | orchestrator | Thursday 19 March 2026 01:04:21 +0000 (0:00:03.946) 0:00:51.072 ******** 2026-03-19 01:05:57.812131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812139 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812150 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.812157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812173 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.812177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812195 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.812198 | orchestrator | 2026-03-19 01:05:57.812203 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-19 01:05:57.812207 | orchestrator | Thursday 19 March 2026 01:04:22 +0000 (0:00:01.009) 0:00:52.081 ******** 2026-03-19 
01:05:57.812210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812241 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.812244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812250 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.812254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812271 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.812274 | orchestrator | 2026-03-19 01:05:57.812278 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-19 01:05:57.812281 | orchestrator | Thursday 19 March 2026 01:04:23 +0000 (0:00:00.835) 0:00:52.917 ******** 2026-03-19 01:05:57.812284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812376 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812385 | orchestrator | 2026-03-19 01:05:57.812390 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-19 01:05:57.812412 | orchestrator | Thursday 19 March 2026 01:04:27 +0000 (0:00:03.752) 0:00:56.669 ******** 2026-03-19 01:05:57.812418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-19 01:05:57.812423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-19 01:05:57.812428 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-19 01:05:57.812433 | orchestrator | 2026-03-19 01:05:57.812438 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-19 01:05:57.812443 | orchestrator | Thursday 19 March 2026 01:04:29 +0000 (0:00:01.840) 0:00:58.510 ******** 2026-03-19 01:05:57.812452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 
01:05:57.812670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812676 | orchestrator | 2026-03-19 01:05:57.812682 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-19 01:05:57.812687 | orchestrator | Thursday 19 March 2026 01:04:41 +0000 (0:00:12.051) 0:01:10.562 ******** 2026-03-19 01:05:57.812692 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.812697 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:05:57.812702 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:05:57.812707 | orchestrator | 2026-03-19 01:05:57.812712 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-03-19 01:05:57.812721 | orchestrator | Thursday 19 March 2026 01:04:42 +0000 (0:00:01.642) 0:01:12.205 ******** 2026-03-19 01:05:57.812727 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:05:57.812732 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.812737 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:05:57.812741 | orchestrator | 2026-03-19 01:05:57.812747 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-19 01:05:57.812752 | orchestrator | Thursday 19 March 2026 01:04:45 +0000 (0:00:02.191) 0:01:14.397 ******** 2026-03-19 
01:05:57.812764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812795 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.812803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-19 01:05:57.812809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812827 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.812831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 01:05:57.812834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 01:05:57.812853 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.812858 | orchestrator | 2026-03-19 01:05:57.812864 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-19 01:05:57.812869 | orchestrator | Thursday 19 March 2026 01:04:46 +0000 (0:00:01.107) 0:01:15.504 ******** 2026-03-19 01:05:57.812874 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.812880 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.812885 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.812904 | orchestrator | 2026-03-19 01:05:57.812910 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-19 01:05:57.812917 | orchestrator | 
Thursday 19 March 2026 01:04:46 +0000 (0:00:00.347) 0:01:15.852 ******** 2026-03-19 01:05:57.812922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812929 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 01:05:57.812939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.812993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.813003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 01:05:57.813010 | orchestrator | 2026-03-19 01:05:57.813013 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-19 01:05:57.813016 | orchestrator | Thursday 19 March 2026 01:04:49 +0000 (0:00:03.310) 0:01:19.162 ******** 2026-03-19 01:05:57.813019 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.813024 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:05:57.813029 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:05:57.813035 | orchestrator | 2026-03-19 01:05:57.813040 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-19 01:05:57.813045 | orchestrator | Thursday 19 March 2026 01:04:50 +0000 (0:00:00.254) 0:01:19.417 ******** 2026-03-19 01:05:57.813050 | orchestrator | changed: 
[testbed-node-0] 2026-03-19 01:05:57.813056 | orchestrator | 2026-03-19 01:05:57.813061 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-19 01:05:57.813066 | orchestrator | Thursday 19 March 2026 01:04:52 +0000 (0:00:01.990) 0:01:21.408 ******** 2026-03-19 01:05:57.813071 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.813076 | orchestrator | 2026-03-19 01:05:57.813081 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-19 01:05:57.813087 | orchestrator | Thursday 19 March 2026 01:04:54 +0000 (0:00:02.052) 0:01:23.460 ******** 2026-03-19 01:05:57.813092 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.813097 | orchestrator | 2026-03-19 01:05:57.813102 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-19 01:05:57.813107 | orchestrator | Thursday 19 March 2026 01:05:11 +0000 (0:00:17.154) 0:01:40.615 ******** 2026-03-19 01:05:57.813112 | orchestrator | 2026-03-19 01:05:57.813118 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-19 01:05:57.813122 | orchestrator | Thursday 19 March 2026 01:05:11 +0000 (0:00:00.060) 0:01:40.675 ******** 2026-03-19 01:05:57.813128 | orchestrator | 2026-03-19 01:05:57.813133 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-19 01:05:57.813138 | orchestrator | Thursday 19 March 2026 01:05:11 +0000 (0:00:00.060) 0:01:40.736 ******** 2026-03-19 01:05:57.813143 | orchestrator | 2026-03-19 01:05:57.813149 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-19 01:05:57.813157 | orchestrator | Thursday 19 March 2026 01:05:11 +0000 (0:00:00.060) 0:01:40.797 ******** 2026-03-19 01:05:57.813162 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.813168 | orchestrator | changed: 
[testbed-node-2] 2026-03-19 01:05:57.813173 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:05:57.813178 | orchestrator | 2026-03-19 01:05:57.813183 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-19 01:05:57.813188 | orchestrator | Thursday 19 March 2026 01:05:29 +0000 (0:00:17.670) 0:01:58.467 ******** 2026-03-19 01:05:57.813193 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.813198 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:05:57.813203 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:05:57.813208 | orchestrator | 2026-03-19 01:05:57.813213 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-19 01:05:57.813219 | orchestrator | Thursday 19 March 2026 01:05:34 +0000 (0:00:05.101) 0:02:03.569 ******** 2026-03-19 01:05:57.813224 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.813229 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:05:57.813235 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:05:57.813240 | orchestrator | 2026-03-19 01:05:57.813245 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-19 01:05:57.813253 | orchestrator | Thursday 19 March 2026 01:05:51 +0000 (0:00:16.664) 0:02:20.233 ******** 2026-03-19 01:05:57.813259 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:05:57.813264 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:05:57.813269 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:05:57.813274 | orchestrator | 2026-03-19 01:05:57.813280 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-19 01:05:57.813285 | orchestrator | Thursday 19 March 2026 01:05:56 +0000 (0:00:05.235) 0:02:25.469 ******** 2026-03-19 01:05:57.813290 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:05:57.813296 | orchestrator | 2026-03-19 
01:05:57.813301 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:05:57.813307 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-19 01:05:57.813313 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:05:57.813319 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:05:57.813324 | orchestrator | 2026-03-19 01:05:57.813329 | orchestrator | 2026-03-19 01:05:57.813335 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:05:57.813340 | orchestrator | Thursday 19 March 2026 01:05:56 +0000 (0:00:00.241) 0:02:25.710 ******** 2026-03-19 01:05:57.813346 | orchestrator | =============================================================================== 2026-03-19 01:05:57.813351 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.67s 2026-03-19 01:05:57.813359 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.15s 2026-03-19 01:05:57.813365 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 16.66s 2026-03-19 01:05:57.813370 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.05s 2026-03-19 01:05:57.813375 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.81s 2026-03-19 01:05:57.813380 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.67s 2026-03-19 01:05:57.813386 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.24s 2026-03-19 01:05:57.813391 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.10s 2026-03-19 01:05:57.813397 | 
orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.02s 2026-03-19 01:05:57.813405 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.01s 2026-03-19 01:05:57.813411 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.95s 2026-03-19 01:05:57.813429 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.83s 2026-03-19 01:05:57.813434 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.75s 2026-03-19 01:05:57.813439 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.56s 2026-03-19 01:05:57.813445 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.31s 2026-03-19 01:05:57.813450 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.22s 2026-03-19 01:05:57.813455 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.08s 2026-03-19 01:05:57.813460 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.01s 2026-03-19 01:05:57.813465 | orchestrator | cinder : Generating 'hostid' file for cinder_volume --------------------- 2.19s 2026-03-19 01:05:57.813471 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.06s 2026-03-19 01:05:57.813477 | orchestrator | 2026-03-19 01:05:57 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED 2026-03-19 01:05:57.813482 | orchestrator | 2026-03-19 01:05:57 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:05:57.813558 | orchestrator | 2026-03-19 01:05:57 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:05:57.813566 | orchestrator | 2026-03-19 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-03-19 
01:07:20.089605 | orchestrator | 2026-03-19 01:07:20 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:07:23.127919 | orchestrator | 2026-03-19 01:07:23 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state STARTED 2026-03-19 01:07:23.128715 | orchestrator | 2026-03-19 01:07:23 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:07:23.130125 | orchestrator | 2026-03-19 01:07:23 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:07:23.130166 | orchestrator | 2026-03-19 01:07:23 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:07:26.170372 | orchestrator | 2026-03-19 01:07:26 | INFO  | Task d6d632f9-d016-43d8-8997-3ffddf9557cc is in state SUCCESS 2026-03-19 01:07:26.171896 | orchestrator | 2026-03-19 01:07:26.171920 | orchestrator | 2026-03-19 01:07:26.171923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:07:26.171927 | orchestrator | 2026-03-19 01:07:26.171931 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:07:26.171934 | orchestrator | Thursday 19 March 2026 01:05:34 +0000 (0:00:00.269) 0:00:00.269 ******** 2026-03-19 01:07:26.171937 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:07:26.171941 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:07:26.171945 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:07:26.171948 | orchestrator | 2026-03-19 01:07:26.171952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:07:26.171955 | orchestrator | Thursday 19 March 2026 01:05:34 +0000 (0:00:00.248) 0:00:00.518 ******** 2026-03-19 01:07:26.171959 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-19 01:07:26.171963 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-19 01:07:26.171966 | orchestrator | ok: [testbed-node-2] => 
(item=enable_grafana_True) 2026-03-19 01:07:26.171969 | orchestrator | 2026-03-19 01:07:26.171973 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-19 01:07:26.171976 | orchestrator | 2026-03-19 01:07:26.171979 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-19 01:07:26.171983 | orchestrator | Thursday 19 March 2026 01:05:34 +0000 (0:00:00.224) 0:00:00.742 ******** 2026-03-19 01:07:26.171986 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:07:26.171990 | orchestrator | 2026-03-19 01:07:26.171993 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-19 01:07:26.171996 | orchestrator | Thursday 19 March 2026 01:05:35 +0000 (0:00:00.435) 0:00:01.177 ******** 2026-03-19 01:07:26.172010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 01:07:26.172028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 01:07:26.172031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 01:07:26.172035 | orchestrator | 2026-03-19 01:07:26.172038 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-19 01:07:26.172041 | orchestrator | Thursday 19 March 2026 01:05:36 +0000 (0:00:01.339) 0:00:02.516 ******** 2026-03-19 01:07:26.172045 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-19 01:07:26.172048 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-19 01:07:26.172052 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 01:07:26.172055 | orchestrator | 2026-03-19 01:07:26.172060 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-19 01:07:26.172065 | orchestrator | Thursday 19 March 2026 01:05:37 +0000 (0:00:00.829) 0:00:03.345 ******** 2026-03-19 01:07:26.172070 | orchestrator | included: 
/ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:07:26.172074 | orchestrator |
2026-03-19 01:07:26.172079 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-19 01:07:26.172083 | orchestrator | Thursday 19 March 2026 01:05:37 +0000 (0:00:00.456) 0:00:03.802 ********
2026-03-19 01:07:26.172115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-19 01:07:26.172121 | orchestrator | changed: [testbed-node-1] => (item=grafana)
2026-03-19 01:07:26.172127 | orchestrator | changed: [testbed-node-2] => (item=grafana)
2026-03-19 01:07:26.172134 | orchestrator |
2026-03-19 01:07:26.172137 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-19 01:07:26.172140 | orchestrator | Thursday 19 March 2026 01:05:39 +0000 (0:00:01.370) 0:00:05.173 ********
2026-03-19 01:07:26.172143 | orchestrator | skipping: [testbed-node-0] => (item=grafana)
2026-03-19 01:07:26.172147 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:07:26.172150 | orchestrator | skipping: [testbed-node-1] => (item=grafana)
2026-03-19 01:07:26.172153 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:07:26.172159 | orchestrator | skipping: [testbed-node-2] => (item=grafana)
2026-03-19 01:07:26.172175 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:07:26.172178 | orchestrator |
2026-03-19 01:07:26.172181 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-19 01:07:26.172184 | orchestrator | Thursday 19 March 2026 01:05:39 +0000 (0:00:00.339) 0:00:05.513 ********
2026-03-19 01:07:26.172188 | orchestrator | skipping: [testbed-node-0] => (item=grafana)
2026-03-19 01:07:26.172193 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:07:26.172199 | orchestrator | skipping: [testbed-node-1] => (item=grafana)
2026-03-19 01:07:26.172202 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:07:26.172205 | orchestrator | skipping: [testbed-node-2] => (item=grafana)
2026-03-19 01:07:26.172208 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:07:26.172211 | orchestrator |
2026-03-19 01:07:26.172226 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-19 01:07:26.172229 | orchestrator | Thursday 19 March 2026 01:05:39 +0000 (0:00:00.515) 0:00:06.028 ********
2026-03-19 01:07:26.172233 | orchestrator | changed: [testbed-node-0] => (item=grafana)
2026-03-19 01:07:26.172236 | orchestrator | changed: [testbed-node-1] => (item=grafana)
2026-03-19 01:07:26.172242 | orchestrator | changed: [testbed-node-2] => (item=grafana)
2026-03-19 01:07:26.172245 | orchestrator |
2026-03-19 01:07:26.172249 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-19 01:07:26.172254 | orchestrator | Thursday 19 March 2026 01:05:41 +0000 (0:00:01.351) 0:00:07.380 ********
2026-03-19 01:07:26.172258 | orchestrator | changed: [testbed-node-0] => (item=grafana)
2026-03-19 01:07:26.172263 | orchestrator | changed: [testbed-node-1] => (item=grafana)
2026-03-19 01:07:26.172266 | orchestrator | changed: [testbed-node-2] => (item=grafana)
2026-03-19 01:07:26.172270 | orchestrator |
2026-03-19 01:07:26.172273 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-19 01:07:26.172276 | orchestrator | Thursday 19 March 2026 01:05:42 +0000 (0:00:01.214) 0:00:08.595 ********
2026-03-19 01:07:26.172279 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:07:26.172282 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:07:26.172285 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:07:26.172288 | orchestrator |
2026-03-19 01:07:26.172291 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-19 01:07:26.172294 | orchestrator | Thursday 19 March 2026 01:05:42 +0000 (0:00:00.284) 0:00:08.879 ********
2026-03-19 01:07:26.172298 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-19 01:07:26.172301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-19 01:07:26.172304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-19 01:07:26.172307 | orchestrator |
2026-03-19 01:07:26.172310 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-19 01:07:26.172313 | orchestrator | Thursday 19 March 2026 01:05:43 +0000 (0:00:01.096) 0:00:09.976 ********
2026-03-19 01:07:26.172316 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-19 01:07:26.172320 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-19 01:07:26.172323 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-19 01:07:26.172326 | orchestrator |
2026-03-19 01:07:26.172329 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-19 01:07:26.172334 | orchestrator | Thursday 19 March 2026 01:05:44 +0000 (0:00:01.164) 0:00:11.140 ********
2026-03-19 01:07:26.172339 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 01:07:26.172343 | orchestrator |
2026-03-19 01:07:26.172346 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-19 01:07:26.172349 | orchestrator | Thursday 19 March 2026 01:05:45 +0000 (0:00:00.944) 0:00:12.085 ********
2026-03-19 01:07:26.172352 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-19 01:07:26.172355 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-19 01:07:26.172358 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:07:26.172362 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:07:26.172365 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:07:26.172368 | orchestrator |
2026-03-19 01:07:26.172371 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-19 01:07:26.172374 | orchestrator | Thursday 19 March 2026 01:05:46 +0000 (0:00:00.313) 0:00:12.728 ********
2026-03-19 01:07:26.172377 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:07:26.172422 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:07:26.172425 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:07:26.172428 | orchestrator |
2026-03-19 01:07:26.172431 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-19 01:07:26.172447 | orchestrator | Thursday 19 March 2026 01:05:46 +0000 (0:00:00.313) 0:00:13.041 ********
2026-03-19 01:07:26.172456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1360259, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0834043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 01:07:26.172461 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster-advanced.json)
2026-03-19 01:07:26.172464 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster-advanced.json)
2026-03-19 01:07:26.172468 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfsdashboard.json)
2026-03-19 01:07:26.172476 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfsdashboard.json)
2026-03-19 01:07:26.172479 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfsdashboard.json)
2026-03-19 01:07:26.172483 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-overview.json)
2026-03-19 01:07:26.172488 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-overview.json)
2026-03-19 01:07:26.172491 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-overview.json)
2026-03-19 01:07:26.172494 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_pools.json)
2026-03-19 01:07:26.172500 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_pools.json)
2026-03-19 01:07:26.172505 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_pools.json)
2026-03-19 01:07:26.172509 | orchestrator | changed: [testbed-node-0] => (item=ceph/rgw-s3-analytics.json)
2026-03-19 01:07:26.172514 | orchestrator | changed: [testbed-node-2] => (item=ceph/rgw-s3-analytics.json)
2026-03-19 01:07:26.172517 | orchestrator | changed: [testbed-node-1] => (item=ceph/rgw-s3-analytics.json)
2026-03-19 01:07:26.172521 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-nvmeof-performance.json)
2026-03-19 01:07:26.172529 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-nvmeof-performance.json)
2026-03-19 01:07:26.172536 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-nvmeof-performance.json)
2026-03-19 01:07:26.172540 | orchestrator | changed: [testbed-node-0] => (item=ceph/osd-device-details.json)
2026-03-19 01:07:26.172546 | orchestrator | changed: [testbed-node-2] => (item=ceph/osd-device-details.json)
2026-03-19 01:07:26.172550 | orchestrator | changed: [testbed-node-1] => (item=ceph/osd-device-details.json)
2026-03-19 01:07:26.172553 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-overview.json)
2026-03-19 01:07:26.172559 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-overview.json)
2026-03-19 01:07:26.172675 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-overview.json)
2026-03-19 01:07:26.172681 | orchestrator | changed: [testbed-node-0] => (item=ceph/README.md)
2026-03-19 01:07:26.172685 | orchestrator | changed: [testbed-node-2] => (item=ceph/README.md)
2026-03-19 01:07:26.172691 | orchestrator | changed: [testbed-node-1] => (item=ceph/README.md)
2026-03-19 01:07:26.172695 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster.json)
2026-03-19 01:07:26.172702 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster.json)
2026-03-19 01:07:26.172706 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster.json)
2026-03-19 01:07:26.172713 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfs-overview.json)
2026-03-19 01:07:26.172717 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfs-overview.json)
2026-03-19 01:07:26.172725 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfs-overview.json)
2026-03-19 01:07:26.172731 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-detail.json)
2026-03-19 01:07:26.172738 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-detail.json)
2026-03-19 01:07:26.172752 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json)
2026-03-19 01:07:26.172761 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json)
2026-03-19 01:07:26.172766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1360336, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1035256, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1360336, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1035256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1360281, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0878026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1360281, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0878026, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1360281, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0878026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1360329, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1012056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1360329, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 
'ctime': 1773879527.1012056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1360329, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1012056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1360351, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.105837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1360351, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 
'mtime': 1773878551.0, 'ctime': 1773879527.105837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1360351, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.105837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1360319, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0993292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1360319, 'dev': 102, 'nlink': 1, 
'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0993292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1360319, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0993292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1360315, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0982292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 63043, 'inode': 1360315, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0982292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1360315, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0982292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1360311, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.096801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1360311, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.096801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1360311, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.096801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1360326, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1001785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1360326, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1001785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1360326, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1001785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1360301, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0959039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1360301, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0959039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1360301, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0959039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1360334, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1021786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1360334, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1021786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1360334, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1021786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1360275, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0861783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172969 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1360275, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0861783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1360275, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.0861783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.172998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1360461, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1376112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.173002 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1360461, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1376112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.173009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1360461, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1376112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.173013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1360387, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.117179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.173016 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1360387, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.117179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.173150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1360387, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.117179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 01:07:26.173156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1360370, 'dev': 102, 'nlink': 1, 'atime': 1773878551.0, 'mtime': 1773878551.0, 'ctime': 1773879527.1091826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}})
2026-03-19 01:07:26.173159 | orchestrator | changed: [testbed-node-2, testbed-node-1] => (item=infrastructure/database.json, mode=0644, owner=root:root, size=30898)
2026-03-19 01:07:26.173171 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=infrastructure/node-rsrc-use.json, mode=0644, owner=root:root, size=15767)
2026-03-19 01:07:26.173183 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=infrastructure/alertmanager-overview.json, mode=0644, owner=root:root, size=9645)
2026-03-19 01:07:26.173197 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=infrastructure/opensearch.json, mode=0644, owner=root:root, size=65458)
2026-03-19 01:07:26.173209 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=infrastructure/node_exporter_full.json, mode=0644, owner=root:root, size=682774)
2026-03-19 01:07:26.173244 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/prometheus-remote-write.json, mode=0644, owner=root:root, size=22303)
2026-03-19 01:07:26.173262 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/redfish.json, mode=0644, owner=root:root, size=38087)
2026-03-19 01:07:26.173276 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/nodes.json, mode=0644, owner=root:root, size=21194)
2026-03-19 01:07:26.173287 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/memcached.json, mode=0644, owner=root:root, size=24243)
2026-03-19 01:07:26.173308 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/fluentd.json, mode=0644, owner=root:root, size=82960)
2026-03-19 01:07:26.173318 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/libvirt.json, mode=0644, owner=root:root, size=29672)
2026-03-19 01:07:26.173335 | orchestrator | changed: [testbed-node-1, testbed-node-0, testbed-node-2] => (item=infrastructure/elasticsearch.json, mode=0644, owner=root:root, size=187864)
2026-03-19 01:07:26.173345 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, owner=root:root, size=15957)
2026-03-19 01:07:26.173363 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/rabbitmq.json, mode=0644, owner=root:root, size=222049)
2026-03-19 01:07:26.173373 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/prometheus_alertmanager.json, mode=0644, owner=root:root, size=115472)
2026-03-19 01:07:26.173389 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=infrastructure/blackbox.json, mode=0644, owner=root:root, size=31128)
2026-03-19 01:07:26.173398 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/cadvisor.json, mode=0644, owner=root:root, size=53882)
2026-03-19 01:07:26.173414 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json, mode=0644, owner=root:root, size=70691)
2026-03-19 01:07:26.173424 | orchestrator | changed: [testbed-node-2, testbed-node-1, testbed-node-0] => (item=infrastructure/prometheus.json, mode=0644, owner=root:root, size=21951)
2026-03-19 01:07:26.173448 | orchestrator |
2026-03-19 01:07:26.173451 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-19 01:07:26.173455 | orchestrator | Thursday 19 March 2026 01:06:22 +0000 (0:00:35.238) 0:00:48.280 ********
2026-03-19 01:07:26.173460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-19 01:07:26.173463 | orchestrator | changed: [testbed-node-0] => (item=grafana)
2026-03-19 01:07:26.173467 | orchestrator | changed: [testbed-node-2] => (item=grafana)
2026-03-19 01:07:26.173470 | orchestrator |
2026-03-19 01:07:26.173473 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-19 01:07:26.173476 | orchestrator | Thursday 19 March 2026 01:06:23 +0000 (0:00:01.133) 0:00:49.413 ********
2026-03-19 01:07:26.173479 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:07:26.173482 | orchestrator |
2026-03-19 01:07:26.173486 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-19 01:07:26.173489 | orchestrator | Thursday 19 March 2026 01:06:25 +0000 (0:00:02.184) 0:00:51.598 ******** 2026-03-19 01:07:26.173494 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:07:26.173497 | orchestrator | 2026-03-19 01:07:26.173500 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-19 01:07:26.173503 | orchestrator | Thursday 19 March 2026 01:06:27 +0000 (0:00:01.977) 0:00:53.576 ******** 2026-03-19 01:07:26.173506 | orchestrator | 2026-03-19 01:07:26.173509 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-19 01:07:26.173512 | orchestrator | Thursday 19 March 2026 01:06:27 +0000 (0:00:00.056) 0:00:53.632 ******** 2026-03-19 01:07:26.173515 | orchestrator | 2026-03-19 01:07:26.173518 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-19 01:07:26.173521 | orchestrator | Thursday 19 March 2026 01:06:27 +0000 (0:00:00.057) 0:00:53.690 ******** 2026-03-19 01:07:26.173524 | orchestrator | 2026-03-19 01:07:26.173528 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-19 01:07:26.173531 | orchestrator | Thursday 19 March 2026 01:06:27 +0000 (0:00:00.060) 0:00:53.751 ******** 2026-03-19 01:07:26.173534 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:07:26.173539 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:07:26.173542 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:07:26.173545 | orchestrator | 2026-03-19 01:07:26.173548 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-19 01:07:26.173551 | orchestrator | Thursday 19 March 2026 01:06:29 +0000 (0:00:01.491) 0:00:55.242 ******** 2026-03-19 01:07:26.173554 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:07:26.173557 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:07:26.173561 | 
orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-19 01:07:26.173564 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-19 01:07:26.173567 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:07:26.173570 | orchestrator | 2026-03-19 01:07:26.173573 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-19 01:07:26.173576 | orchestrator | Thursday 19 March 2026 01:06:55 +0000 (0:00:26.620) 0:01:21.863 ******** 2026-03-19 01:07:26.173579 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:07:26.173583 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:07:26.173586 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:07:26.173589 | orchestrator | 2026-03-19 01:07:26.173592 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-19 01:07:26.173595 | orchestrator | Thursday 19 March 2026 01:07:17 +0000 (0:00:22.234) 0:01:44.097 ******** 2026-03-19 01:07:26.173598 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:07:26.173601 | orchestrator | 2026-03-19 01:07:26.173604 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-19 01:07:26.173607 | orchestrator | Thursday 19 March 2026 01:07:20 +0000 (0:00:02.189) 0:01:46.287 ******** 2026-03-19 01:07:26.173610 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:07:26.173613 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:07:26.173617 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:07:26.173620 | orchestrator | 2026-03-19 01:07:26.173623 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-19 01:07:26.173626 | orchestrator | Thursday 19 March 2026 01:07:20 +0000 (0:00:00.310) 0:01:46.598 ******** 2026-03-19 01:07:26.173632 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-19 01:07:26.173636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-19 01:07:26.173642 | orchestrator | 2026-03-19 01:07:26.173645 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-19 01:07:26.173648 | orchestrator | Thursday 19 March 2026 01:07:22 +0000 (0:00:02.229) 0:01:48.827 ******** 2026-03-19 01:07:26.173651 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:07:26.173654 | orchestrator | 2026-03-19 01:07:26.173657 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:07:26.173661 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:07:26.173665 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:07:26.173668 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 01:07:26.173671 | orchestrator | 2026-03-19 01:07:26.173674 | orchestrator | 2026-03-19 01:07:26.173677 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:07:26.173680 | orchestrator | Thursday 19 March 2026 01:07:22 +0000 (0:00:00.284) 0:01:49.112 ******** 2026-03-19 01:07:26.173683 | orchestrator | 
=============================================================================== 2026-03-19 01:07:26.173686 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.24s 2026-03-19 01:07:26.173689 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.62s 2026-03-19 01:07:26.173693 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 22.23s 2026-03-19 01:07:26.173696 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.23s 2026-03-19 01:07:26.173699 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.19s 2026-03-19 01:07:26.173702 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.18s 2026-03-19 01:07:26.173705 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 1.98s 2026-03-19 01:07:26.173708 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.49s 2026-03-19 01:07:26.173711 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s 2026-03-19 01:07:26.173714 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.35s 2026-03-19 01:07:26.173717 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.34s 2026-03-19 01:07:26.173720 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.21s 2026-03-19 01:07:26.173725 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.16s 2026-03-19 01:07:26.173728 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.13s 2026-03-19 01:07:26.173731 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.10s 2026-03-19 01:07:26.173734 | orchestrator | grafana : Find 
custom grafana dashboards -------------------------------- 0.94s 2026-03-19 01:07:26.173738 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2026-03-19 01:07:26.173741 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.64s 2026-03-19 01:07:26.173744 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.52s 2026-03-19 01:07:26.173747 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.46s 2026-03-19 01:07:26.173750 | orchestrator | 2026-03-19 01:07:26 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:07:26.174090 | orchestrator | 2026-03-19 01:07:26 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state STARTED 2026-03-19 01:07:26.174105 | orchestrator | 2026-03-19 01:07:26 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:07:50.558425 | orchestrator | 2026-03-19 01:07:50 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:07:50.560387 | orchestrator | 2026-03-19 01:07:50 | INFO  | Task 27cb2759-5442-4d37-a429-8498095da436 is in state SUCCESS 2026-03-19 01:07:50.562622 | orchestrator | 2026-03-19 01:07:50 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:07:50.563174 | orchestrator | 2026-03-19 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:25.873347 | orchestrator | 2026-03-19 01:10:25 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:25.873584 | orchestrator | 2026-03-19 01:10:25 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:25.873594 | orchestrator | 2026-03-19 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:28.902859 | orchestrator | 2026-03-19 01:10:28 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:28.904806 | orchestrator | 2026-03-19 01:10:28 | INFO  
| Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:28.904855 | orchestrator | 2026-03-19 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:31.942849 | orchestrator | 2026-03-19 01:10:31 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:31.943180 | orchestrator | 2026-03-19 01:10:31 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:31.943369 | orchestrator | 2026-03-19 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:34.993175 | orchestrator | 2026-03-19 01:10:34 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:34.994690 | orchestrator | 2026-03-19 01:10:34 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:34.994801 | orchestrator | 2026-03-19 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:38.043871 | orchestrator | 2026-03-19 01:10:38 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:38.045895 | orchestrator | 2026-03-19 01:10:38 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:38.046552 | orchestrator | 2026-03-19 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:41.120358 | orchestrator | 2026-03-19 01:10:41 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:41.121262 | orchestrator | 2026-03-19 01:10:41 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:41.121304 | orchestrator | 2026-03-19 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:44.168135 | orchestrator | 2026-03-19 01:10:44 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:44.170249 | orchestrator | 2026-03-19 01:10:44 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 
01:10:44.170309 | orchestrator | 2026-03-19 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:47.208056 | orchestrator | 2026-03-19 01:10:47 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:47.209602 | orchestrator | 2026-03-19 01:10:47 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:47.209635 | orchestrator | 2026-03-19 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:50.237755 | orchestrator | 2026-03-19 01:10:50 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:50.239145 | orchestrator | 2026-03-19 01:10:50 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:50.239193 | orchestrator | 2026-03-19 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:53.280565 | orchestrator | 2026-03-19 01:10:53 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:53.281106 | orchestrator | 2026-03-19 01:10:53 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:53.281145 | orchestrator | 2026-03-19 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:56.321727 | orchestrator | 2026-03-19 01:10:56 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:56.322609 | orchestrator | 2026-03-19 01:10:56 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:56.322777 | orchestrator | 2026-03-19 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:10:59.363616 | orchestrator | 2026-03-19 01:10:59 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:10:59.366366 | orchestrator | 2026-03-19 01:10:59 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:10:59.367000 | orchestrator | 2026-03-19 01:10:59 | INFO  | Wait 1 second(s) 
until the next check 2026-03-19 01:11:02.410453 | orchestrator | 2026-03-19 01:11:02 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:02.412321 | orchestrator | 2026-03-19 01:11:02 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:02.412778 | orchestrator | 2026-03-19 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:05.454314 | orchestrator | 2026-03-19 01:11:05 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:05.456345 | orchestrator | 2026-03-19 01:11:05 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:05.456689 | orchestrator | 2026-03-19 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:08.489462 | orchestrator | 2026-03-19 01:11:08 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:08.491525 | orchestrator | 2026-03-19 01:11:08 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:08.491570 | orchestrator | 2026-03-19 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:11.520022 | orchestrator | 2026-03-19 01:11:11 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:11.520481 | orchestrator | 2026-03-19 01:11:11 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:11.520545 | orchestrator | 2026-03-19 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:14.548633 | orchestrator | 2026-03-19 01:11:14 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:14.550350 | orchestrator | 2026-03-19 01:11:14 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:14.550419 | orchestrator | 2026-03-19 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:17.575081 | orchestrator | 2026-03-19 
01:11:17 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:17.575644 | orchestrator | 2026-03-19 01:11:17 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:17.575788 | orchestrator | 2026-03-19 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:20.627069 | orchestrator | 2026-03-19 01:11:20 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:20.630201 | orchestrator | 2026-03-19 01:11:20 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:20.631346 | orchestrator | 2026-03-19 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:23.670959 | orchestrator | 2026-03-19 01:11:23 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:23.673158 | orchestrator | 2026-03-19 01:11:23 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:23.673211 | orchestrator | 2026-03-19 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:26.718067 | orchestrator | 2026-03-19 01:11:26 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:26.720842 | orchestrator | 2026-03-19 01:11:26 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:26.720920 | orchestrator | 2026-03-19 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:29.766510 | orchestrator | 2026-03-19 01:11:29 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:29.767252 | orchestrator | 2026-03-19 01:11:29 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:29.767279 | orchestrator | 2026-03-19 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:32.811864 | orchestrator | 2026-03-19 01:11:32 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state 
STARTED 2026-03-19 01:11:32.812064 | orchestrator | 2026-03-19 01:11:32 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:32.812076 | orchestrator | 2026-03-19 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:35.865452 | orchestrator | 2026-03-19 01:11:35 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:35.868485 | orchestrator | 2026-03-19 01:11:35 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:35.868535 | orchestrator | 2026-03-19 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:38.925017 | orchestrator | 2026-03-19 01:11:38 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:38.928614 | orchestrator | 2026-03-19 01:11:38 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:38.928747 | orchestrator | 2026-03-19 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:41.978216 | orchestrator | 2026-03-19 01:11:41 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:41.982252 | orchestrator | 2026-03-19 01:11:41 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:41.982989 | orchestrator | 2026-03-19 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:45.029668 | orchestrator | 2026-03-19 01:11:45 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:45.031178 | orchestrator | 2026-03-19 01:11:45 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:45.031222 | orchestrator | 2026-03-19 01:11:45 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:48.080066 | orchestrator | 2026-03-19 01:11:48 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:48.082361 | orchestrator | 2026-03-19 01:11:48 | INFO  
| Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:48.082418 | orchestrator | 2026-03-19 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:51.113789 | orchestrator | 2026-03-19 01:11:51 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:51.113839 | orchestrator | 2026-03-19 01:11:51 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:51.113917 | orchestrator | 2026-03-19 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:54.164089 | orchestrator | 2026-03-19 01:11:54 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state STARTED 2026-03-19 01:11:54.167047 | orchestrator | 2026-03-19 01:11:54 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:54.167113 | orchestrator | 2026-03-19 01:11:54 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:57.213884 | orchestrator | 2026-03-19 01:11:57.213945 | orchestrator | 2026-03-19 01:11:57.213953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:11:57.213959 | orchestrator | 2026-03-19 01:11:57.213965 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:11:57.213971 | orchestrator | Thursday 19 March 2026 01:04:40 +0000 (0:00:00.173) 0:00:00.173 ******** 2026-03-19 01:11:57.213976 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:11:57.213982 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:11:57.213987 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:11:57.213992 | orchestrator | 2026-03-19 01:11:57.213997 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:11:57.214002 | orchestrator | Thursday 19 March 2026 01:04:40 +0000 (0:00:00.330) 0:00:00.504 ******** 2026-03-19 01:11:57.214006 | orchestrator | ok: [testbed-node-0] => 
(item=enable_nova_True)
2026-03-19 01:11:57.214054 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-19 01:11:57.214066 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-19 01:11:57.214071 | orchestrator | 
2026-03-19 01:11:57.214076 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-19 01:11:57.214081 | orchestrator | 
2026-03-19 01:11:57.214086 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-19 01:11:57.214090 | orchestrator | Thursday 19 March 2026 01:04:41 +0000 (0:00:00.476) 0:00:00.980 ********
2026-03-19 01:11:57.214095 | orchestrator | 
2026-03-19 01:11:57.214100 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-19 01:11:57.214105 | orchestrator | 
2026-03-19 01:11:57.214111 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-19 01:11:57.214115 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.214120 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:11:57.214125 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:11:57.214130 | orchestrator | 
2026-03-19 01:11:57.214135 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:11:57.214141 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:11:57.214147 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:11:57.214152 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:11:57.214157 | orchestrator | 
2026-03-19 01:11:57.214162 | orchestrator | 
2026-03-19 01:11:57.214167 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:11:57.214172 | orchestrator | Thursday 19 March 2026 01:07:49 +0000 (0:03:08.015) 0:03:08.996 ********
2026-03-19 01:11:57.214177 | orchestrator | ===============================================================================
2026-03-19 01:11:57.214182 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 188.02s
2026-03-19 01:11:57.214187 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-03-19 01:11:57.214192 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-19 01:11:57.214197 | orchestrator | 
2026-03-19 01:11:57.214201 | orchestrator | 
2026-03-19 01:11:57.214206 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 01:11:57.214210 | orchestrator | 
2026-03-19 01:11:57.214236 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-19 01:11:57.214242 | orchestrator | Thursday 19 March 2026 01:03:55 +0000 (0:00:00.392) 0:00:00.392 ********
2026-03-19 01:11:57.214247 | orchestrator | changed: [testbed-manager]
2026-03-19 01:11:57.214252 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.214258 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:11:57.214263 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:11:57.214268 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:11:57.214273 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:11:57.214278 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:11:57.214283 | orchestrator | 
2026-03-19 01:11:57.214288 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 01:11:57.214294 | orchestrator | Thursday 19 March 2026 01:03:56 +0000 (0:00:00.984) 0:00:01.377 ********
2026-03-19 01:11:57.214299 | orchestrator | changed: [testbed-manager]
2026-03-19 01:11:57.214304 | orchestrator | changed:
[testbed-node-0]
2026-03-19 01:11:57.214309 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:11:57.214314 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:11:57.214318 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:11:57.214323 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:11:57.214328 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:11:57.214333 | orchestrator | 
2026-03-19 01:11:57.214338 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 01:11:57.214343 | orchestrator | Thursday 19 March 2026 01:03:56 +0000 (0:00:00.876) 0:00:02.253 ********
2026-03-19 01:11:57.214348 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-19 01:11:57.214354 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-19 01:11:57.214359 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-19 01:11:57.214364 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-19 01:11:57.214369 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-19 01:11:57.214374 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-19 01:11:57.214390 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-19 01:11:57.214456 | orchestrator | 
2026-03-19 01:11:57.214464 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-19 01:11:57.214470 | orchestrator | 
2026-03-19 01:11:57.214476 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-19 01:11:57.214481 | orchestrator | Thursday 19 March 2026 01:03:57 +0000 (0:00:00.737) 0:00:02.991 ********
2026-03-19 01:11:57.214487 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:11:57.214493 | orchestrator | 
2026-03-19 01:11:57.214514 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-19 01:11:57.214520 | orchestrator | Thursday 19 March 2026 01:03:58 +0000 (0:00:01.168) 0:00:04.159 ********
2026-03-19 01:11:57.214525 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-19 01:11:57.214532 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-19 01:11:57.214538 | orchestrator | 
2026-03-19 01:11:57.214544 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-19 01:11:57.214550 | orchestrator | Thursday 19 March 2026 01:04:03 +0000 (0:00:04.645) 0:00:08.804 ********
2026-03-19 01:11:57.214567 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 01:11:57.214577 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 01:11:57.214583 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.214588 | orchestrator | 
2026-03-19 01:11:57.214594 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-19 01:11:57.214599 | orchestrator | Thursday 19 March 2026 01:04:07 +0000 (0:00:04.039) 0:00:12.844 ********
2026-03-19 01:11:57.214675 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.214681 | orchestrator | 
2026-03-19 01:11:57.214687 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-19 01:11:57.214811 | orchestrator | Thursday 19 March 2026 01:04:08 +0000 (0:00:00.711) 0:00:13.556 ********
2026-03-19 01:11:57.214821 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.214827 | orchestrator | 
2026-03-19 01:11:57.214833 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-19 01:11:57.214839 | orchestrator | Thursday 19 March 2026 01:04:09 +0000 (0:00:01.239) 0:00:14.795 ********
2026-03-19 01:11:57.214845 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.214850 | orchestrator | 
2026-03-19 01:11:57.214856 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-19 01:11:57.214861 | orchestrator | Thursday 19 March 2026 01:04:13 +0000 (0:00:03.585) 0:00:18.381 ********
2026-03-19 01:11:57.214866 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.214871 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.214877 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.214882 | orchestrator | 
2026-03-19 01:11:57.214887 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-19 01:11:57.214892 | orchestrator | Thursday 19 March 2026 01:04:14 +0000 (0:00:01.226) 0:00:19.607 ********
2026-03-19 01:11:57.214898 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.214903 | orchestrator | 
2026-03-19 01:11:57.214909 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-19 01:11:57.214914 | orchestrator | Thursday 19 March 2026 01:04:44 +0000 (0:00:30.017) 0:00:49.624 ********
2026-03-19 01:11:57.214919 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.214924 | orchestrator | 
2026-03-19 01:11:57.214929 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-19 01:11:57.214933 | orchestrator | Thursday 19 March 2026 01:04:58 +0000 (0:00:14.366) 0:01:03.991 ********
2026-03-19 01:11:57.214938 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.214944 | orchestrator | 
2026-03-19 01:11:57.214948 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-19 01:11:57.214954 | orchestrator | Thursday 19 March 2026 01:05:11 +0000 (0:00:12.470) 0:01:16.461 ********
2026-03-19 01:11:57.214959 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.214965 | orchestrator | 
2026-03-19 01:11:57.214970 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-19 01:11:57.214975 | orchestrator | Thursday 19 March 2026 01:05:12 +0000 (0:00:00.837) 0:01:17.299 ********
2026-03-19 01:11:57.214981 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.214986 | orchestrator | 
2026-03-19 01:11:57.214991 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-19 01:11:57.214997 | orchestrator | Thursday 19 March 2026 01:05:12 +0000 (0:00:00.573) 0:01:17.872 ********
2026-03-19 01:11:57.215003 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:11:57.215008 | orchestrator | 
2026-03-19 01:11:57.215013 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-19 01:11:57.215018 | orchestrator | Thursday 19 March 2026 01:05:13 +0000 (0:00:00.799) 0:01:18.671 ********
2026-03-19 01:11:57.215023 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.215029 | orchestrator | 
2026-03-19 01:11:57.215035 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-19 01:11:57.215040 | orchestrator | Thursday 19 March 2026 01:05:30 +0000 (0:00:16.690) 0:01:35.362 ********
2026-03-19 01:11:57.215046 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.215051 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215057 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215062 | orchestrator | 
2026-03-19 01:11:57.215090 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-19 01:11:57.215106 | orchestrator | 
2026-03-19 01:11:57.215111 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-19 01:11:57.215116 | orchestrator | Thursday 19 March 2026 01:05:30 +0000 (0:00:00.536) 0:01:35.898 ********
2026-03-19 01:11:57.215129 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:11:57.215134 | orchestrator | 
2026-03-19 01:11:57.215139 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-19 01:11:57.215143 | orchestrator | Thursday 19 March 2026 01:05:31 +0000 (0:00:00.674) 0:01:36.572 ********
2026-03-19 01:11:57.215148 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215162 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215167 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.215172 | orchestrator | 
2026-03-19 01:11:57.215179 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-19 01:11:57.215185 | orchestrator | Thursday 19 March 2026 01:05:33 +0000 (0:00:02.076) 0:01:38.649 ********
2026-03-19 01:11:57.215191 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215198 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215204 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.215210 | orchestrator | 
2026-03-19 01:11:57.215228 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-19 01:11:57.215235 | orchestrator | Thursday 19 March 2026 01:05:35 +0000 (0:00:02.241) 0:01:40.890 ********
2026-03-19 01:11:57.215242 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.215248 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215253 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215258 | orchestrator | 
2026-03-19 01:11:57.215263 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-19 01:11:57.215270 | orchestrator | Thursday 19 March 2026 01:05:35 +0000 (0:00:00.402) 0:01:41.293 ********
2026-03-19 01:11:57.215277 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-03-19 01:11:57.215282 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215288 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-03-19 01:11:57.215293 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215298 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-19 01:11:57.215305 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-19 01:11:57.215311 | orchestrator | 
2026-03-19 01:11:57.215317 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-19 01:11:57.215323 | orchestrator | Thursday 19 March 2026 01:05:43 +0000 (0:00:07.311) 0:01:48.604 ********
2026-03-19 01:11:57.215329 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.215343 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215356 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215362 | orchestrator | 
2026-03-19 01:11:57.215367 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-19 01:11:57.215374 | orchestrator | Thursday 19 March 2026 01:05:43 +0000 (0:00:00.360) 0:01:48.964 ********
2026-03-19 01:11:57.215380 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-03-19 01:11:57.215391 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.215397 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-03-19 01:11:57.215403 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215409 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-03-19 01:11:57.215414 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215420 | orchestrator | 
2026-03-19 01:11:57.215426 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-19 01:11:57.215432 | orchestrator | Thursday 19 March 2026 01:05:44 +0000 (0:00:01.047) 0:01:50.012 ********
2026-03-19 01:11:57.215511 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215518 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215523 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.215529 | orchestrator | 
2026-03-19 01:11:57.215535 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-19 01:11:57.215540 | orchestrator | Thursday 19 March 2026 01:05:45 +0000 (0:00:00.492) 0:01:50.504 ********
2026-03-19 01:11:57.215546 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215559 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215564 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.215570 | orchestrator | 
2026-03-19 01:11:57.215576 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-19 01:11:57.215582 | orchestrator | Thursday 19 March 2026 01:05:46 +0000 (0:00:00.911) 0:01:51.416 ********
2026-03-19 01:11:57.215588 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215594 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215600 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.215605 | orchestrator | 
2026-03-19 01:11:57.215611 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-19 01:11:57.215616 | orchestrator | Thursday 19 March 2026 01:05:48 +0000 (0:00:02.053) 0:01:53.469 ********
2026-03-19 01:11:57.215636 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215642 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215647 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.215652 | orchestrator | 
2026-03-19 01:11:57.215657 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-19 01:11:57.215663 | orchestrator | Thursday 19 March 2026 01:06:08 +0000 (0:00:20.209) 0:02:13.679 ********
2026-03-19 01:11:57.215669 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215674 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215680 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.215686 | orchestrator | 
2026-03-19 01:11:57.215691 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-19 01:11:57.215696 | orchestrator | Thursday 19 March 2026 01:06:20 +0000 (0:00:12.187) 0:02:25.866 ********
2026-03-19 01:11:57.215714 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:11:57.215719 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215724 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215729 | orchestrator | 
2026-03-19 01:11:57.215734 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-19 01:11:57.215740 | orchestrator | Thursday 19 March 2026 01:06:21 +0000 (0:00:00.756) 0:02:26.623 ********
2026-03-19 01:11:57.215745 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215750 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215755 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:11:57.215761 | orchestrator | 
2026-03-19 01:11:57.215766 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-19 01:11:57.215772 | orchestrator | Thursday 19 March 2026 01:06:32 +0000 (0:00:11.604) 0:02:38.227 ********
2026-03-19 01:11:57.215777 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.215782 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215787 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215793 | orchestrator | 
2026-03-19 01:11:57.215799 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-19 01:11:57.215810 | orchestrator | Thursday 19 March 2026 01:06:33 +0000 (0:00:01.007) 0:02:39.235 ********
2026-03-19 01:11:57.215816 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.215822 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.215829 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.215835 | orchestrator | 
2026-03-19 01:11:57.215842 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-19 01:11:57.215848 | orchestrator | 
2026-03-19 01:11:57.215855 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-19 01:11:57.215873 | orchestrator | Thursday 19 March 2026 01:06:34 +0000 (0:00:00.271) 0:02:39.507 ********
2026-03-19 01:11:57.215880 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:11:57.215888 | orchestrator | 
2026-03-19 01:11:57.215895 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-19 01:11:57.215902 | orchestrator | Thursday 19 March 2026 01:06:34 +0000 (0:00:00.557) 0:02:40.064 ********
2026-03-19 01:11:57.215917 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy)) 
2026-03-19 01:11:57.215924 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-19 01:11:57.215931 | orchestrator | 
2026-03-19 01:11:57.215938 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-19 01:11:57.215945 | orchestrator | Thursday 19 March 2026 01:06:37 +0000 (0:00:02.799) 0:02:42.864 ********
2026-03-19 01:11:57.215951 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal) 
2026-03-19 01:11:57.215959 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public) 
2026-03-19 01:11:57.215966 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-19 01:11:57.215973 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-19 01:11:57.215980 | orchestrator | 
2026-03-19 01:11:57.215987 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-19 01:11:57.215993 | orchestrator | Thursday 19 March 2026 01:06:43 +0000 (0:00:06.085) 0:02:48.949 ********
2026-03-19 01:11:57.216000 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 01:11:57.216006 | orchestrator | 
2026-03-19 01:11:57.216013 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-19 01:11:57.216019 | orchestrator | Thursday 19 March 2026 01:06:46 +0000 (0:00:03.122) 0:02:52.072 ********
2026-03-19 01:11:57.216025 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-19 01:11:57.216031 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 01:11:57.216037 | orchestrator | 
2026-03-19 01:11:57.216042 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-19 01:11:57.216047 | orchestrator | Thursday 19 March 2026 01:06:50 +0000 (0:00:03.784) 0:02:55.856 ********
2026-03-19 01:11:57.216051 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-19 01:11:57.216082 | orchestrator | 
2026-03-19 01:11:57.216089 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-19 01:11:57.216095 | orchestrator | Thursday 19 March 2026 01:06:54 +0000 (0:00:03.734) 0:02:59.591 ********
2026-03-19 01:11:57.216100 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-19 01:11:57.216107 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-19 01:11:57.216113 | orchestrator | 
2026-03-19 01:11:57.216119 | orchestrator | TASK [nova : Ensuring config directories exist]
******************************** 2026-03-19 01:11:57.216125 | orchestrator | Thursday 19 March 2026 01:07:01 +0000 (0:00:07.466) 0:03:07.057 ******** 2026-03-19 01:11:57.216135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.216155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.216168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.216187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.216194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.216201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.216212 | orchestrator | 2026-03-19 01:11:57.216247 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-19 01:11:57.216255 | 
orchestrator | Thursday 19 March 2026 01:07:03 +0000 (0:00:01.834) 0:03:08.892 ********
2026-03-19 01:11:57.216260 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.216265 | orchestrator |
2026-03-19 01:11:57.216270 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-19 01:11:57.216275 | orchestrator | Thursday 19 March 2026 01:07:03 +0000 (0:00:00.125) 0:03:09.018 ********
2026-03-19 01:11:57.216280 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.216284 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.216290 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.216294 | orchestrator |
2026-03-19 01:11:57.216303 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-19 01:11:57.216308 | orchestrator | Thursday 19 March 2026 01:07:04 +0000 (0:00:00.718) 0:03:09.305 ********
2026-03-19 01:11:57.216313 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 01:11:57.216318 | orchestrator |
2026-03-19 01:11:57.216323 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-19 01:11:57.216328 | orchestrator | Thursday 19 March 2026 01:07:04 +0000 (0:00:00.271) 0:03:10.024 ********
2026-03-19 01:11:57.216333 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.216338 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.216343 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.216348 | orchestrator |
2026-03-19 01:11:57.216353 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-19 01:11:57.216358 | orchestrator | Thursday 19 March 2026 01:07:04 +0000 (0:00:00.271) 0:03:10.295 ********
2026-03-19 01:11:57.216363 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:11:57.216368 | orchestrator |
2026-03-19 01:11:57.216374 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-19 01:11:57.216379 | orchestrator | Thursday 19 March 2026 01:07:05 +0000 (0:00:00.662) 0:03:10.958 ******** 2026-03-19 01:11:57.216385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.216391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.216415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.216422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.216427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.216432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.216437 | orchestrator | 2026-03-19 01:11:57.216443 | orchestrator | TASK [service-cert-copy : 
nova | Copying over backend internal TLS certificate] *** 2026-03-19 01:11:57.216448 | orchestrator | Thursday 19 March 2026 01:07:07 +0000 (0:00:02.054) 0:03:13.012 ******** 2026-03-19 01:11:57.216463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.216474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.216480 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.216485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.216491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-03-19 01:11:57.216496 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.216501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.216512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.216518 
| orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.216522 | orchestrator |
2026-03-19 01:11:57.216527 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-19 01:11:57.216532 | orchestrator | Thursday 19 March 2026 01:07:08 +0000 (0:00:00.551) 0:03:13.563 ********
2026-03-19 01:11:57 | INFO  | Task b865d8ef-3404-4953-b25c-662c3cc056c5 is in state SUCCESS
2026-03-19 01:11:57.216578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-19 01:11:57.217221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True,
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.217234 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.217241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.217251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.217257 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.217268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.217273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.217279 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.217284 | orchestrator | 2026-03-19 01:11:57.217289 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-19 01:11:57.217296 | orchestrator | Thursday 19 March 2026 01:07:09 +0000 (0:00:00.916) 0:03:14.480 ******** 2026-03-19 01:11:57.217305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217314 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217351 | orchestrator | 2026-03-19 01:11:57.217356 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-19 01:11:57.217361 | orchestrator | Thursday 19 March 2026 01:07:11 +0000 (0:00:02.495) 0:03:16.975 ******** 2026-03-19 01:11:57.217372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217414 | orchestrator | 2026-03-19 01:11:57.217420 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-19 01:11:57.217425 | orchestrator | Thursday 19 March 2026 01:07:16 +0000 (0:00:05.239) 0:03:22.215 ******** 2026-03-19 01:11:57.217430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.217439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.217445 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.217450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.217459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.217467 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.217473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 01:11:57.217484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.217490 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.217495 | orchestrator | 2026-03-19 01:11:57.217501 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-19 01:11:57.217506 | orchestrator | Thursday 19 March 2026 01:07:17 +0000 (0:00:00.617) 0:03:22.833 ******** 2026-03-19 01:11:57.217512 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:11:57.217517 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:11:57.217523 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:11:57.217528 | orchestrator | 2026-03-19 01:11:57.217534 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-19 01:11:57.217539 | orchestrator | Thursday 19 March 2026 01:07:19 +0000 (0:00:01.735) 0:03:24.569 ******** 2026-03-19 01:11:57.217544 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.217549 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.217554 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.217559 | orchestrator | 2026-03-19 01:11:57.217564 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-19 01:11:57.217570 | orchestrator | Thursday 19 March 2026 01:07:19 +0000 (0:00:00.324) 0:03:24.893 ******** 2026-03-19 01:11:57.217581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 01:11:57.217608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.217626 | orchestrator | 2026-03-19 01:11:57.217632 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-19 01:11:57.217640 | orchestrator | Thursday 19 March 2026 01:07:21 +0000 (0:00:01.940) 0:03:26.834 ******** 2026-03-19 01:11:57.217649 | orchestrator | 2026-03-19 01:11:57.217654 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-19 01:11:57.217659 | orchestrator | Thursday 19 March 2026 01:07:21 +0000 (0:00:00.131) 0:03:26.966 
******** 2026-03-19 01:11:57.217665 | orchestrator | 2026-03-19 01:11:57.217670 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-19 01:11:57.217675 | orchestrator | Thursday 19 March 2026 01:07:21 +0000 (0:00:00.134) 0:03:27.100 ******** 2026-03-19 01:11:57.217681 | orchestrator | 2026-03-19 01:11:57.217686 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-19 01:11:57.217692 | orchestrator | Thursday 19 March 2026 01:07:22 +0000 (0:00:00.267) 0:03:27.368 ******** 2026-03-19 01:11:57.217709 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:11:57.217715 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:11:57.217720 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:11:57.217725 | orchestrator | 2026-03-19 01:11:57.217730 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-19 01:11:57.217735 | orchestrator | Thursday 19 March 2026 01:07:42 +0000 (0:00:19.976) 0:03:47.344 ******** 2026-03-19 01:11:57.217740 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:11:57.217745 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:11:57.217750 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:11:57.217755 | orchestrator | 2026-03-19 01:11:57.217760 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-19 01:11:57.217765 | orchestrator | 2026-03-19 01:11:57.217770 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 01:11:57.217775 | orchestrator | Thursday 19 March 2026 01:07:51 +0000 (0:00:09.887) 0:03:57.232 ******** 2026-03-19 01:11:57.217780 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:11:57.217786 | orchestrator | 2026-03-19 
01:11:57.217791 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 01:11:57.217796 | orchestrator | Thursday 19 March 2026 01:07:53 +0000 (0:00:01.241) 0:03:58.474 ******** 2026-03-19 01:11:57.217802 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.217807 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.217812 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.217817 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.217822 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.217827 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.217833 | orchestrator | 2026-03-19 01:11:57.217838 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-19 01:11:57.217843 | orchestrator | Thursday 19 March 2026 01:07:53 +0000 (0:00:00.708) 0:03:59.182 ******** 2026-03-19 01:11:57.217849 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.217854 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.217860 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.217865 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:11:57.217871 | orchestrator | 2026-03-19 01:11:57.217877 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-19 01:11:57.217882 | orchestrator | Thursday 19 March 2026 01:07:54 +0000 (0:00:00.766) 0:03:59.949 ******** 2026-03-19 01:11:57.217888 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-19 01:11:57.217895 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-19 01:11:57.217900 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-19 01:11:57.217905 | orchestrator | 2026-03-19 01:11:57.217911 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 
2026-03-19 01:11:57.217916 | orchestrator | Thursday 19 March 2026 01:07:55 +0000 (0:00:01.128) 0:04:01.077 ******** 2026-03-19 01:11:57.217922 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-19 01:11:57.217927 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-19 01:11:57.217937 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-19 01:11:57.217942 | orchestrator | 2026-03-19 01:11:57.217948 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-19 01:11:57.217953 | orchestrator | Thursday 19 March 2026 01:07:56 +0000 (0:00:01.115) 0:04:02.192 ******** 2026-03-19 01:11:57.217958 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-19 01:11:57.217963 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.217968 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-19 01:11:57.217973 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.217978 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-19 01:11:57.217983 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.217989 | orchestrator | 2026-03-19 01:11:57.217994 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-19 01:11:57.217999 | orchestrator | Thursday 19 March 2026 01:07:57 +0000 (0:00:00.508) 0:04:02.700 ******** 2026-03-19 01:11:57.218004 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:11:57.218009 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:11:57.218048 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.218057 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:11:57.218062 | orchestrator | skipping: [testbed-node-1] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:11:57.218068 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-19 01:11:57.218073 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-19 01:11:57.218078 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.218084 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-19 01:11:57.218090 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:11:57.218102 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:11:57.218107 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.218112 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-19 01:11:57.218117 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-19 01:11:57.218122 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-19 01:11:57.218128 | orchestrator | 2026-03-19 01:11:57.218133 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-19 01:11:57.218138 | orchestrator | Thursday 19 March 2026 01:07:58 +0000 (0:00:01.067) 0:04:03.768 ******** 2026-03-19 01:11:57.218143 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.218148 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.218153 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.218158 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.218163 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.218168 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.218173 | orchestrator | 2026-03-19 01:11:57.218178 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 
2026-03-19 01:11:57.218184 | orchestrator | Thursday 19 March 2026 01:07:59 +0000 (0:00:01.085) 0:04:04.853 ******** 2026-03-19 01:11:57.218189 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.218194 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.218200 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.218205 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.218210 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.218215 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.218221 | orchestrator | 2026-03-19 01:11:57.218226 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-19 01:11:57.218237 | orchestrator | Thursday 19 March 2026 01:08:01 +0000 (0:00:01.795) 0:04:06.648 ******** 2026-03-19 01:11:57.218244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2026-03-19 01:11:57.218300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218355 | orchestrator | 2026-03-19 01:11:57.218361 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 01:11:57.218367 | orchestrator | Thursday 19 March 2026 01:08:03 +0000 (0:00:02.055) 0:04:08.703 ******** 2026-03-19 01:11:57.218375 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:11:57.218382 | orchestrator | 2026-03-19 01:11:57.218387 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-19 01:11:57.218393 | orchestrator | Thursday 19 March 2026 01:08:04 +0000 (0:00:01.139) 0:04:09.843 ******** 2026-03-19 01:11:57.218403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.218514 | orchestrator | 2026-03-19 01:11:57.218519 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-19 01:11:57.218525 | orchestrator | Thursday 19 March 2026 01:08:07 +0000 (0:00:03.273) 0:04:13.116 ******** 2026-03-19 01:11:57.218531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 01:11:57.218536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 01:11:57.218545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218556 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.218562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 01:11:57.218568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 01:11:57.218573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218579 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.218585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 01:11:57.218593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 01:11:57.218602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218612 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.218618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 01:11:57.218624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218629 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.218635 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 01:11:57.218640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218646 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.218651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 01:11:57.218660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218850 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.218868 | orchestrator | 2026-03-19 01:11:57 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:11:57.218873 | orchestrator | 2026-03-19 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:11:57.218879 | orchestrator | 2026-03-19 01:11:57.218885 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-19 01:11:57.218890 | orchestrator | Thursday 19 March 2026 01:08:09 +0000 (0:00:01.619) 0:04:14.736 ******** 2026-03-19 01:11:57.218896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-03-19 01:11:57.218902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 01:11:57.218907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218913 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.218918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 01:11:57.218937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 01:11:57.218943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218948 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.218953 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 01:11:57.218958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 01:11:57.218963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218973 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.218981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 01:11:57.218989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.218995 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.219000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 01:11:57.219005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.219011 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.219016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 01:11:57.219021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 01:11:57.219026 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.219032 | orchestrator | 2026-03-19 01:11:57.219037 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 01:11:57.219046 | orchestrator | Thursday 19 March 2026 01:08:11 +0000 (0:00:02.026) 0:04:16.762 ******** 2026-03-19 01:11:57.219050 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.219055 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.219060 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.219065 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:11:57.219071 | orchestrator | 2026-03-19 01:11:57.219076 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-19 01:11:57.219081 | orchestrator | Thursday 19 March 2026 01:08:12 +0000 (0:00:00.973) 0:04:17.736 ******** 2026-03-19 01:11:57.219086 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 01:11:57.219091 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 01:11:57.219096 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 01:11:57.219101 | orchestrator | 2026-03-19 01:11:57.219105 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-19 01:11:57.219110 | orchestrator | Thursday 19 March 2026 01:08:13 +0000 (0:00:00.977) 0:04:18.713 ******** 2026-03-19 01:11:57.219118 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 01:11:57.219123 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 01:11:57.219128 | orchestrator | ok: [testbed-node-5 -> 
localhost] 2026-03-19 01:11:57.219133 | orchestrator | 2026-03-19 01:11:57.219138 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-19 01:11:57.219144 | orchestrator | Thursday 19 March 2026 01:08:14 +0000 (0:00:01.096) 0:04:19.810 ******** 2026-03-19 01:11:57.219149 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:11:57.219154 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:11:57.219159 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:11:57.219164 | orchestrator | 2026-03-19 01:11:57.219170 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-19 01:11:57.219175 | orchestrator | Thursday 19 March 2026 01:08:14 +0000 (0:00:00.484) 0:04:20.295 ******** 2026-03-19 01:11:57.219183 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:11:57.219188 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:11:57.219193 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:11:57.219198 | orchestrator | 2026-03-19 01:11:57.219203 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-19 01:11:57.219208 | orchestrator | Thursday 19 March 2026 01:08:15 +0000 (0:00:00.472) 0:04:20.768 ******** 2026-03-19 01:11:57.219213 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-19 01:11:57.219218 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-19 01:11:57.219223 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-19 01:11:57.219229 | orchestrator | 2026-03-19 01:11:57.219233 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-19 01:11:57.219239 | orchestrator | Thursday 19 March 2026 01:08:16 +0000 (0:00:01.058) 0:04:21.826 ******** 2026-03-19 01:11:57.219243 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-19 01:11:57.219249 | orchestrator | changed: [testbed-node-4] => 
(item=nova-compute) 2026-03-19 01:11:57.219254 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-19 01:11:57.219259 | orchestrator | 2026-03-19 01:11:57.219264 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-19 01:11:57.219269 | orchestrator | Thursday 19 March 2026 01:08:17 +0000 (0:00:01.255) 0:04:23.081 ******** 2026-03-19 01:11:57.219274 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-19 01:11:57.219278 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-19 01:11:57.219283 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-19 01:11:57.219288 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-19 01:11:57.219293 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-19 01:11:57.219298 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-19 01:11:57.219306 | orchestrator | 2026-03-19 01:11:57.219311 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-19 01:11:57.219316 | orchestrator | Thursday 19 March 2026 01:08:21 +0000 (0:00:03.881) 0:04:26.963 ******** 2026-03-19 01:11:57.219321 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.219326 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.219331 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.219335 | orchestrator | 2026-03-19 01:11:57.219340 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-19 01:11:57.219345 | orchestrator | Thursday 19 March 2026 01:08:21 +0000 (0:00:00.286) 0:04:27.250 ******** 2026-03-19 01:11:57.219349 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.219354 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.219359 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.219364 | orchestrator | 
2026-03-19 01:11:57.219369 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-19 01:11:57.219374 | orchestrator | Thursday 19 March 2026 01:08:22 +0000 (0:00:00.267) 0:04:27.517 ******** 2026-03-19 01:11:57.219379 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.219384 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.219389 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.219394 | orchestrator | 2026-03-19 01:11:57.219399 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-19 01:11:57.219403 | orchestrator | Thursday 19 March 2026 01:08:23 +0000 (0:00:01.624) 0:04:29.141 ******** 2026-03-19 01:11:57.219408 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-19 01:11:57.219413 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-19 01:11:57.219418 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-19 01:11:57.219423 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-19 01:11:57.219428 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-19 01:11:57.219433 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-19 01:11:57.219438 | orchestrator | 2026-03-19 01:11:57.219442 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-19 01:11:57.219447 
| orchestrator | Thursday 19 March 2026 01:08:26 +0000 (0:00:02.909) 0:04:32.051 ******** 2026-03-19 01:11:57.219452 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 01:11:57.219457 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 01:11:57.219462 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 01:11:57.219467 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 01:11:57.219471 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.219481 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 01:11:57.219486 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.219491 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 01:11:57.219496 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.219500 | orchestrator | 2026-03-19 01:11:57.219505 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-19 01:11:57.219510 | orchestrator | Thursday 19 March 2026 01:08:29 +0000 (0:00:02.975) 0:04:35.026 ******** 2026-03-19 01:11:57.219514 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.219519 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.219523 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.219536 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-03-19 01:11:57.219541 | orchestrator | 2026-03-19 01:11:57.219546 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-19 01:11:57.219551 | orchestrator | Thursday 19 March 2026 01:08:31 +0000 (0:00:02.130) 0:04:37.157 ******** 2026-03-19 01:11:57.219555 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 01:11:57.219560 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 01:11:57.219565 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 
01:11:57.219569 | orchestrator | 2026-03-19 01:11:57.219574 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-19 01:11:57.219579 | orchestrator | Thursday 19 March 2026 01:08:32 +0000 (0:00:00.894) 0:04:38.052 ******** 2026-03-19 01:11:57.219583 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.219588 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.219592 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.219598 | orchestrator | 2026-03-19 01:11:57.219603 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-19 01:11:57.219607 | orchestrator | Thursday 19 March 2026 01:08:33 +0000 (0:00:00.275) 0:04:38.327 ******** 2026-03-19 01:11:57.219612 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.219617 | orchestrator | 2026-03-19 01:11:57.219621 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-19 01:11:57.219626 | orchestrator | Thursday 19 March 2026 01:08:33 +0000 (0:00:00.131) 0:04:38.459 ******** 2026-03-19 01:11:57.219631 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.219635 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.219640 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.219644 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.219649 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.219654 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.219658 | orchestrator | 2026-03-19 01:11:57.219663 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-19 01:11:57.219668 | orchestrator | Thursday 19 March 2026 01:08:33 +0000 (0:00:00.695) 0:04:39.155 ******** 2026-03-19 01:11:57.219672 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 01:11:57.219677 | orchestrator | 2026-03-19 01:11:57.219681 | 
orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-19 01:11:57.219686 | orchestrator | Thursday 19 March 2026 01:08:34 +0000 (0:00:00.645) 0:04:39.800 ******** 2026-03-19 01:11:57.219690 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.219695 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.219925 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.219941 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.219946 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.219951 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.219956 | orchestrator | 2026-03-19 01:11:57.219962 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-19 01:11:57.219967 | orchestrator | Thursday 19 March 2026 01:08:34 +0000 (0:00:00.485) 0:04:40.286 ******** 2026-03-19 01:11:57.219974 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.219992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 
01:11:57.220060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.220071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})
2026-03-19 01:11:57.220077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220097 | orchestrator |
2026-03-19 01:11:57.220102 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-19 01:11:57.220110 | orchestrator | Thursday 19 March 2026 01:08:38 +0000 (0:00:03.287) 0:04:43.573 ********
2026-03-19 01:11:57.220115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.220121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 01:11:57.220127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.220137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 01:11:57.220145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.220154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 01:11:57.220160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 01:11:57.220188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 01:11:57.220197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 01:11:57.220202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.220223 | orchestrator |
2026-03-19 01:11:57.220228 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-19 01:11:57.220233 | orchestrator | Thursday 19 March 2026 01:08:43 +0000 (0:00:05.302) 0:04:48.876 ********
2026-03-19 01:11:57.220238 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:11:57.220244 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:11:57.220249 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:11:57.220254 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220259 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220264 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220269 | orchestrator |
2026-03-19 01:11:57.220274 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-19 01:11:57.220280 | orchestrator | Thursday 19 March 2026 01:08:44 +0000 (0:00:01.329) 0:04:50.206 ********
2026-03-19 01:11:57.220285 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-19 01:11:57.220291 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-19 01:11:57.220296 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-19 01:11:57.220301 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-19 01:11:57.220306 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-19 01:11:57.220311 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-19 01:11:57.220315 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-19 01:11:57.220321 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220326 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-19 01:11:57.220332 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220336 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-19 01:11:57.220345 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220349 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-19 01:11:57.220355 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-19 01:11:57.220360 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-19 01:11:57.220365 | orchestrator |
2026-03-19 01:11:57.220369 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-19 01:11:57.220374 | orchestrator | Thursday 19 March 2026 01:08:47 +0000 (0:00:03.010) 0:04:53.216 ********
2026-03-19 01:11:57.220380 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:11:57.220388 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:11:57.220394 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:11:57.220399 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220405 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220410 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220415 | orchestrator |
2026-03-19 01:11:57.220420 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-19 01:11:57.220425 | orchestrator | Thursday 19 March 2026 01:08:48 +0000 (0:00:00.622) 0:04:53.839 ********
2026-03-19 01:11:57.220430 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-19 01:11:57.220436 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-19 01:11:57.220442 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-19 01:11:57.220450 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-19 01:11:57.220455 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-19 01:11:57.220460 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-19 01:11:57.220465 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220470 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220481 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220486 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220491 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220497 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220502 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220508 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220513 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220519 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220524 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220529 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220535 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220540 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-19 01:11:57.220545 | orchestrator |
2026-03-19 01:11:57.220550 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-19 01:11:57.220555 | orchestrator | Thursday 19 March 2026 01:08:53 +0000 (0:00:04.612) 0:04:58.451 ********
2026-03-19 01:11:57.220559 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 01:11:57.220564 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 01:11:57.220569 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 01:11:57.220574 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 01:11:57.220579 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 01:11:57.220584 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 01:11:57.220588 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 01:11:57.220593 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 01:11:57.220603 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 01:11:57.220609 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 01:11:57.220613 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 01:11:57.220622 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 01:11:57.220627 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 01:11:57.220632 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 01:11:57.220641 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220647 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 01:11:57.220652 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220657 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 01:11:57.220661 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220666 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 01:11:57.220671 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 01:11:57.220676 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 01:11:57.220681 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 01:11:57.220687 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 01:11:57.220694 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 01:11:57.220715 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 01:11:57.220720 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 01:11:57.220725 | orchestrator |
2026-03-19 01:11:57.220730 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-19 01:11:57.220736 | orchestrator | Thursday 19 March 2026 01:09:00 +0000 (0:00:06.960) 0:05:05.412 ********
2026-03-19 01:11:57.220742 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:11:57.220749 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:11:57.220756 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:11:57.220763 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220769 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220776 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220782 | orchestrator |
2026-03-19 01:11:57.220789 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-19 01:11:57.220796 | orchestrator | Thursday 19 March 2026 01:09:00 +0000 (0:00:00.556) 0:05:05.968 ********
2026-03-19 01:11:57.220802 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:11:57.220809 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:11:57.220816 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:11:57.220823 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220830 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220837 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220844 | orchestrator |
2026-03-19 01:11:57.220851 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-19 01:11:57.220859 | orchestrator | Thursday 19 March 2026 01:09:01 +0000 (0:00:00.757) 0:05:06.726 ********
2026-03-19 01:11:57.220865 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220871 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220879 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220886 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:11:57.220893 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:11:57.220900 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:11:57.220907 | orchestrator |
2026-03-19 01:11:57.220914 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-03-19 01:11:57.220921 | orchestrator | Thursday 19 March 2026 01:09:03 +0000 (0:00:01.749) 0:05:08.475 ********
2026-03-19 01:11:57.220928 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.220935 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.220948 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.220956 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:11:57.220963 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:11:57.220968 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:11:57.220975 | orchestrator |
2026-03-19 01:11:57.220983 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-19 01:11:57.220989 | orchestrator | Thursday 19 March 2026 01:09:05 +0000 (0:00:01.920) 0:05:10.396 ********
2026-03-19 01:11:57.221001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.221015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 01:11:57.221023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.221030 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:11:57.221037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.221043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 01:11:57.221052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.221057 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:11:57.221067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.221078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 01:11:57.221086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.221093 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:11:57.221101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 01:11:57.221112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.221119 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.221126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 01:11:57.221137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.221144 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.221157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 01:11:57.221165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 01:11:57.221172 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.221179 | orchestrator |
2026-03-19 01:11:57.221186 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-19 01:11:57.221193 | orchestrator | Thursday 19 March 2026 01:09:06 +0000 (0:00:01.347) 0:05:11.744 ********
2026-03-19 01:11:57.221201 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-19 01:11:57.221208 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-19 01:11:57.221214 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:11:57.221221 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-19 01:11:57.221237 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-19 01:11:57.221241 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:11:57.221246 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-19 01:11:57.221251 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-19 01:11:57.221256 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:11:57.221261 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-19 01:11:57.221266 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-19 01:11:57.221271 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:11:57.221276 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-19 01:11:57.221281 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-19 01:11:57.221286 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:11:57.221291 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-19 01:11:57.221296 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-19 01:11:57.221300 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:11:57.221306 | orchestrator |
2026-03-19 01:11:57.221310 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-19 01:11:57.221315 | orchestrator | Thursday 19 March 2026 01:09:07 +0000 (0:00:00.797) 0:05:12.541 ********
2026-03-19 01:11:57.221315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.221325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 01:11:57.221334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '',
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 
01:11:57.221383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2026-03-19 01:11:57.221403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 01:11:57.221425 | orchestrator | 2026-03-19 01:11:57.221431 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 01:11:57.221436 | orchestrator | Thursday 19 March 2026 01:09:09 +0000 (0:00:02.728) 0:05:15.269 ******** 2026-03-19 01:11:57.221442 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.221450 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.221455 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.221460 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.221465 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.221470 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.221475 | orchestrator | 2026-03-19 01:11:57.221480 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 01:11:57.221485 | orchestrator | Thursday 19 March 2026 01:09:10 +0000 (0:00:00.801) 0:05:16.071 ******** 2026-03-19 01:11:57.221490 | orchestrator | 2026-03-19 01:11:57.221495 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 01:11:57.221500 | orchestrator | Thursday 19 March 2026 01:09:10 +0000 (0:00:00.127) 0:05:16.198 ******** 2026-03-19 01:11:57.221505 | orchestrator | 2026-03-19 01:11:57.221509 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 01:11:57.221515 | orchestrator | Thursday 19 March 2026 01:09:11 +0000 (0:00:00.127) 0:05:16.325 ******** 2026-03-19 01:11:57.221520 | orchestrator | 2026-03-19 01:11:57.221525 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-19 01:11:57.221530 | orchestrator | Thursday 19 March 2026 01:09:11 +0000 (0:00:00.127) 0:05:16.452 ******** 2026-03-19 01:11:57.221535 | orchestrator | 2026-03-19 01:11:57.221540 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 01:11:57.221545 | orchestrator | Thursday 19 March 2026 01:09:11 +0000 (0:00:00.128) 0:05:16.580 ******** 2026-03-19 01:11:57.221550 | orchestrator | 2026-03-19 01:11:57.221554 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 01:11:57.221559 | orchestrator | Thursday 19 March 2026 01:09:11 +0000 (0:00:00.271) 0:05:16.852 ******** 2026-03-19 01:11:57.221564 | orchestrator | 2026-03-19 01:11:57.221569 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-19 01:11:57.221574 | orchestrator | Thursday 19 March 2026 01:09:11 +0000 (0:00:00.129) 0:05:16.981 ******** 2026-03-19 01:11:57.221578 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:11:57.221584 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:11:57.221588 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:11:57.221593 | orchestrator | 2026-03-19 01:11:57.221598 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-19 01:11:57.221602 | orchestrator | Thursday 19 March 2026 01:09:18 +0000 (0:00:06.621) 0:05:23.602 ******** 2026-03-19 01:11:57.221607 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:11:57.221612 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:11:57.221617 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:11:57.221622 | orchestrator | 2026-03-19 01:11:57.221627 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-19 01:11:57.221632 | orchestrator | Thursday 19 March 2026 01:09:29 +0000 (0:00:11.042) 
0:05:34.645 ******** 2026-03-19 01:11:57.221637 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.221641 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.221646 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.221650 | orchestrator | 2026-03-19 01:11:57.221655 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-19 01:11:57.221660 | orchestrator | Thursday 19 March 2026 01:09:50 +0000 (0:00:21.149) 0:05:55.794 ******** 2026-03-19 01:11:57.221665 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.221670 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.221675 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.221679 | orchestrator | 2026-03-19 01:11:57.221684 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-19 01:11:57.221688 | orchestrator | Thursday 19 March 2026 01:10:18 +0000 (0:00:27.720) 0:06:23.514 ******** 2026-03-19 01:11:57.221694 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-19 01:11:57.221713 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-19 01:11:57.221728 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-03-19 01:11:57.221733 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.221738 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.221743 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.221748 | orchestrator | 2026-03-19 01:11:57.221752 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-19 01:11:57.221757 | orchestrator | Thursday 19 March 2026 01:10:24 +0000 (0:00:05.995) 0:06:29.510 ******** 2026-03-19 01:11:57.221762 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.221767 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.221772 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.221777 | orchestrator | 2026-03-19 01:11:57.221782 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-19 01:11:57.221790 | orchestrator | Thursday 19 March 2026 01:10:24 +0000 (0:00:00.689) 0:06:30.199 ******** 2026-03-19 01:11:57.221795 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:11:57.221800 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:11:57.221804 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:11:57.221809 | orchestrator | 2026-03-19 01:11:57.221814 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-19 01:11:57.221819 | orchestrator | Thursday 19 March 2026 01:10:44 +0000 (0:00:19.504) 0:06:49.703 ******** 2026-03-19 01:11:57.221824 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.221829 | orchestrator | 2026-03-19 01:11:57.221834 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-19 01:11:57.221845 | orchestrator | Thursday 19 March 2026 01:10:44 +0000 (0:00:00.292) 0:06:49.995 ******** 2026-03-19 01:11:57.221850 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.221855 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 01:11:57.221860 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.221864 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.221869 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.221874 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-19 01:11:57.221879 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 01:11:57.221884 | orchestrator | 2026-03-19 01:11:57.221889 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-19 01:11:57.221893 | orchestrator | Thursday 19 March 2026 01:11:06 +0000 (0:00:21.395) 0:07:11.391 ******** 2026-03-19 01:11:57.221898 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.221903 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.221908 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.221912 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.221917 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.221922 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.221927 | orchestrator | 2026-03-19 01:11:57.221932 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-19 01:11:57.221937 | orchestrator | Thursday 19 March 2026 01:11:14 +0000 (0:00:08.672) 0:07:20.064 ******** 2026-03-19 01:11:57.221942 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.221946 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.221951 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.221956 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.221960 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.221965 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-19 01:11:57.221970 | 
orchestrator | 2026-03-19 01:11:57.221975 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-19 01:11:57.221980 | orchestrator | Thursday 19 March 2026 01:11:18 +0000 (0:00:03.695) 0:07:23.759 ******** 2026-03-19 01:11:57.221984 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 01:11:57.221993 | orchestrator | 2026-03-19 01:11:57.221998 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-19 01:11:57.222003 | orchestrator | Thursday 19 March 2026 01:11:31 +0000 (0:00:13.387) 0:07:37.146 ******** 2026-03-19 01:11:57.222008 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 01:11:57.222040 | orchestrator | 2026-03-19 01:11:57.222047 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-19 01:11:57.222053 | orchestrator | Thursday 19 March 2026 01:11:33 +0000 (0:00:01.442) 0:07:38.589 ******** 2026-03-19 01:11:57.222058 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.222063 | orchestrator | 2026-03-19 01:11:57.222069 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-19 01:11:57.222074 | orchestrator | Thursday 19 March 2026 01:11:34 +0000 (0:00:01.376) 0:07:39.966 ******** 2026-03-19 01:11:57.222079 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 01:11:57.222084 | orchestrator | 2026-03-19 01:11:57.222090 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-19 01:11:57.222095 | orchestrator | Thursday 19 March 2026 01:11:47 +0000 (0:00:12.893) 0:07:52.860 ******** 2026-03-19 01:11:57.222100 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:11:57.222106 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:11:57.222111 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:11:57.222116 | 
orchestrator | ok: [testbed-node-0] 2026-03-19 01:11:57.222122 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:11:57.222127 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:11:57.222132 | orchestrator | 2026-03-19 01:11:57.222138 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-19 01:11:57.222143 | orchestrator | 2026-03-19 01:11:57.222149 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-19 01:11:57.222154 | orchestrator | Thursday 19 March 2026 01:11:49 +0000 (0:00:01.590) 0:07:54.451 ******** 2026-03-19 01:11:57.222159 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:11:57.222164 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:11:57.222170 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:11:57.222175 | orchestrator | 2026-03-19 01:11:57.222181 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-19 01:11:57.222186 | orchestrator | 2026-03-19 01:11:57.222191 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-19 01:11:57.222196 | orchestrator | Thursday 19 March 2026 01:11:50 +0000 (0:00:01.209) 0:07:55.660 ******** 2026-03-19 01:11:57.222202 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.222207 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.222212 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.222217 | orchestrator | 2026-03-19 01:11:57.222223 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-19 01:11:57.222228 | orchestrator | 2026-03-19 01:11:57.222233 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-19 01:11:57.222239 | orchestrator | Thursday 19 March 2026 01:11:50 +0000 (0:00:00.573) 0:07:56.234 ******** 2026-03-19 01:11:57.222244 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-19 01:11:57.222254 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-19 01:11:57.222260 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-19 01:11:57.222266 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-19 01:11:57.222272 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-19 01:11:57.222278 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-19 01:11:57.222283 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:11:57.222289 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-19 01:11:57.222294 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-19 01:11:57.222300 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-19 01:11:57.222318 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-19 01:11:57.222323 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-19 01:11:57.222329 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-19 01:11:57.222334 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:11:57.222340 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-19 01:11:57.222345 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-19 01:11:57.222351 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-19 01:11:57.222356 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-19 01:11:57.222361 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-19 01:11:57.222367 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-19 01:11:57.222373 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:11:57.222378 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-19 01:11:57.222384 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-19 01:11:57.222389 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-19 01:11:57.222395 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-19 01:11:57.222400 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-19 01:11:57.222406 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-19 01:11:57.222412 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.222417 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-19 01:11:57.222423 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-19 01:11:57.222428 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-19 01:11:57.222434 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-19 01:11:57.222440 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-19 01:11:57.222445 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-19 01:11:57.222451 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.222456 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-19 01:11:57.222462 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-19 01:11:57.222467 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-19 01:11:57.222473 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-19 01:11:57.222478 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-19 01:11:57.222483 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-19 01:11:57.222489 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.222494 | orchestrator | 
2026-03-19 01:11:57.222500 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-19 01:11:57.222505 | orchestrator | 2026-03-19 01:11:57.222511 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-19 01:11:57.222516 | orchestrator | Thursday 19 March 2026 01:11:52 +0000 (0:00:01.381) 0:07:57.615 ******** 2026-03-19 01:11:57.222522 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-19 01:11:57.222527 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-19 01:11:57.222533 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.222538 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-19 01:11:57.222544 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-19 01:11:57.222549 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.222554 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-19 01:11:57.222560 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-19 01:11:57.222565 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.222575 | orchestrator | 2026-03-19 01:11:57.222581 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-19 01:11:57.222586 | orchestrator | 2026-03-19 01:11:57.222592 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-19 01:11:57.222597 | orchestrator | Thursday 19 March 2026 01:11:53 +0000 (0:00:00.696) 0:07:58.311 ******** 2026-03-19 01:11:57.222603 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.222608 | orchestrator | 2026-03-19 01:11:57.222613 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-19 01:11:57.222618 | orchestrator | 2026-03-19 01:11:57.222623 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2026-03-19 01:11:57.222628 | orchestrator | Thursday 19 March 2026 01:11:53 +0000 (0:00:00.721) 0:07:59.033 ******** 2026-03-19 01:11:57.222634 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:11:57.222639 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:11:57.222645 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:11:57.222650 | orchestrator | 2026-03-19 01:11:57.222655 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:11:57.222661 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:11:57.222672 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-03-19 01:11:57.222678 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-19 01:11:57.222684 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-19 01:11:57.222695 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-19 01:11:57.222733 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-19 01:11:57.222739 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-19 01:11:57.222744 | orchestrator | 2026-03-19 01:11:57.222749 | orchestrator | 2026-03-19 01:11:57.222754 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:11:57.222759 | orchestrator | Thursday 19 March 2026 01:11:54 +0000 (0:00:00.554) 0:07:59.587 ******** 2026-03-19 01:11:57.222763 | orchestrator | =============================================================================== 2026-03-19 01:11:57.222768 | orchestrator | nova : 
Running Nova API bootstrap container ---------------------------- 30.02s 2026-03-19 01:11:57.222773 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.72s 2026-03-19 01:11:57.222778 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.40s 2026-03-19 01:11:57.222783 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.15s 2026-03-19 01:11:57.222788 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.21s 2026-03-19 01:11:57.222793 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.98s 2026-03-19 01:11:57.222798 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.50s 2026-03-19 01:11:57.222803 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.69s 2026-03-19 01:11:57.222808 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.37s 2026-03-19 01:11:57.222813 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.39s 2026-03-19 01:11:57.222818 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.89s 2026-03-19 01:11:57.222828 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.47s 2026-03-19 01:11:57.222834 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.19s 2026-03-19 01:11:57.222840 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.60s 2026-03-19 01:11:57.222846 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.04s 2026-03-19 01:11:57.222852 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.89s 2026-03-19 01:11:57.222859 | orchestrator | nova-cell : Fail if 
nova-compute service failed to register ------------- 8.67s 2026-03-19 01:11:57.222865 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.47s 2026-03-19 01:11:57.222872 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.31s 2026-03-19 01:11:57.222878 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.96s 2026-03-19 01:12:00.260865 | orchestrator | 2026-03-19 01:12:00 | INFO  | Task 0dd03a23-8653-4723-aa10-654d959747aa is in state STARTED 2026-03-19 01:12:00.260930 | orchestrator | 2026-03-19 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-03-19 01:12:24.608846 | orchestrator | 2026-03-19 01:12:24.608900 | orchestrator | 2026-03-19 01:12:24 | INFO  | Task
0dd03a23-8653-4723-aa10-654d959747aa is in state SUCCESS 2026-03-19 01:12:24.610336 | orchestrator | 2026-03-19 01:12:24.610397 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 01:12:24.610407 | orchestrator | 2026-03-19 01:12:24.610413 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 01:12:24.610473 | orchestrator | Thursday 19 March 2026 01:07:52 +0000 (0:00:00.307) 0:00:00.307 ******** 2026-03-19 01:12:24.610506 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.610513 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:12:24.610525 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:12:24.610531 | orchestrator | 2026-03-19 01:12:24.610537 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 01:12:24.610543 | orchestrator | Thursday 19 March 2026 01:07:52 +0000 (0:00:00.277) 0:00:00.585 ******** 2026-03-19 01:12:24.610560 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-19 01:12:24.610564 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-19 01:12:24.610567 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-19 01:12:24.610571 | orchestrator | 2026-03-19 01:12:24.610574 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-19 
01:12:24.610578 | orchestrator | 2026-03-19 01:12:24.610581 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 01:12:24.610585 | orchestrator | Thursday 19 March 2026 01:07:53 +0000 (0:00:00.331) 0:00:00.916 ******** 2026-03-19 01:12:24.610588 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:12:24.610592 | orchestrator | 2026-03-19 01:12:24.610595 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-19 01:12:24.610599 | orchestrator | Thursday 19 March 2026 01:07:53 +0000 (0:00:00.594) 0:00:01.511 ******** 2026-03-19 01:12:24.610602 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-19 01:12:24.610614 | orchestrator | 2026-03-19 01:12:24.610617 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-19 01:12:24.610620 | orchestrator | Thursday 19 March 2026 01:07:57 +0000 (0:00:03.453) 0:00:04.965 ******** 2026-03-19 01:12:24.610624 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-19 01:12:24.610633 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-19 01:12:24.610636 | orchestrator | 2026-03-19 01:12:24.610639 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-19 01:12:24.610643 | orchestrator | Thursday 19 March 2026 01:08:02 +0000 (0:00:05.627) 0:00:10.593 ******** 2026-03-19 01:12:24.610646 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 01:12:24.610649 | orchestrator | 2026-03-19 01:12:24.610653 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-19 01:12:24.610656 | orchestrator | Thursday 19 March 2026 01:08:05 +0000 
(0:00:02.781) 0:00:13.374 ******** 2026-03-19 01:12:24.610660 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-19 01:12:24.610663 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-19 01:12:24.610666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 01:12:24.610670 | orchestrator | 2026-03-19 01:12:24.610673 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-19 01:12:24.610677 | orchestrator | Thursday 19 March 2026 01:08:12 +0000 (0:00:07.181) 0:00:20.555 ******** 2026-03-19 01:12:24.610680 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 01:12:24.610683 | orchestrator | 2026-03-19 01:12:24.610687 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-19 01:12:24.610690 | orchestrator | Thursday 19 March 2026 01:08:15 +0000 (0:00:02.849) 0:00:23.405 ******** 2026-03-19 01:12:24.610693 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-19 01:12:24.610696 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-19 01:12:24.610700 | orchestrator | 2026-03-19 01:12:24.610703 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-19 01:12:24.610706 | orchestrator | Thursday 19 March 2026 01:08:23 +0000 (0:00:07.887) 0:00:31.292 ******** 2026-03-19 01:12:24.610743 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-19 01:12:24.610749 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-19 01:12:24.610755 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-19 01:12:24.610763 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-19 01:12:24.610769 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_quota_admin) 2026-03-19 01:12:24.610781 | orchestrator | 2026-03-19 01:12:24.610786 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 01:12:24.610791 | orchestrator | Thursday 19 March 2026 01:08:37 +0000 (0:00:13.966) 0:00:45.259 ******** 2026-03-19 01:12:24.610805 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:12:24.610810 | orchestrator | 2026-03-19 01:12:24.610816 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-19 01:12:24.610821 | orchestrator | Thursday 19 March 2026 01:08:38 +0000 (0:00:00.608) 0:00:45.867 ******** 2026-03-19 01:12:24.610826 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.610831 | orchestrator | 2026-03-19 01:12:24.610836 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-19 01:12:24.610841 | orchestrator | Thursday 19 March 2026 01:08:42 +0000 (0:00:04.425) 0:00:50.293 ******** 2026-03-19 01:12:24.610854 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.610860 | orchestrator | 2026-03-19 01:12:24.610878 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-19 01:12:24.610905 | orchestrator | Thursday 19 March 2026 01:08:47 +0000 (0:00:04.437) 0:00:54.731 ******** 2026-03-19 01:12:24.610910 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.610913 | orchestrator | 2026-03-19 01:12:24.610916 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-19 01:12:24.610920 | orchestrator | Thursday 19 March 2026 01:08:49 +0000 (0:00:02.898) 0:00:57.630 ******** 2026-03-19 01:12:24.610923 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-19 01:12:24.610926 | orchestrator | changed: [testbed-node-0] => 
(item=lb-health-mgr-sec-grp) 2026-03-19 01:12:24.610930 | orchestrator | 2026-03-19 01:12:24.610933 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-19 01:12:24.610937 | orchestrator | Thursday 19 March 2026 01:09:01 +0000 (0:00:11.084) 0:01:08.714 ******** 2026-03-19 01:12:24.610941 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-19 01:12:24.610945 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-19 01:12:24.610959 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-19 01:12:24.610964 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-19 01:12:24.610968 | orchestrator | 2026-03-19 01:12:24.610972 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-19 01:12:24.610976 | orchestrator | Thursday 19 March 2026 01:09:16 +0000 (0:00:15.441) 0:01:24.155 ******** 2026-03-19 01:12:24.610982 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.610989 | orchestrator | 2026-03-19 01:12:24.610997 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-19 01:12:24.611002 | orchestrator | Thursday 19 March 2026 01:09:20 +0000 (0:00:04.291) 0:01:28.447 ******** 2026-03-19 01:12:24.611008 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611014 | orchestrator | 2026-03-19 01:12:24.611020 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-19 01:12:24.611026 | orchestrator | Thursday 19 March 2026 01:09:25 +0000 
(0:00:04.827) 0:01:33.275 ******** 2026-03-19 01:12:24.611031 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:12:24.611037 | orchestrator | 2026-03-19 01:12:24.611043 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-19 01:12:24.611048 | orchestrator | Thursday 19 March 2026 01:09:25 +0000 (0:00:00.200) 0:01:33.475 ******** 2026-03-19 01:12:24.611054 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611061 | orchestrator | 2026-03-19 01:12:24.611067 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 01:12:24.611079 | orchestrator | Thursday 19 March 2026 01:09:30 +0000 (0:00:04.363) 0:01:37.839 ******** 2026-03-19 01:12:24.611085 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-03-19 01:12:24.611092 | orchestrator | 2026-03-19 01:12:24.611097 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-19 01:12:24.611103 | orchestrator | Thursday 19 March 2026 01:09:31 +0000 (0:00:01.065) 0:01:38.905 ******** 2026-03-19 01:12:24.611108 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611114 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611120 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611125 | orchestrator | 2026-03-19 01:12:24.611131 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-19 01:12:24.611137 | orchestrator | Thursday 19 March 2026 01:09:36 +0000 (0:00:05.402) 0:01:44.307 ******** 2026-03-19 01:12:24.611142 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611157 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611163 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611169 | orchestrator | 2026-03-19 01:12:24.611175 | orchestrator | TASK [octavia : Add 
Octavia port to openvswitch br-int] ************************ 2026-03-19 01:12:24.611179 | orchestrator | Thursday 19 March 2026 01:09:41 +0000 (0:00:04.837) 0:01:49.145 ******** 2026-03-19 01:12:24.611182 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611186 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611190 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611194 | orchestrator | 2026-03-19 01:12:24.611198 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-19 01:12:24.611202 | orchestrator | Thursday 19 March 2026 01:09:42 +0000 (0:00:00.662) 0:01:49.807 ******** 2026-03-19 01:12:24.611205 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:12:24.611209 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:12:24.611213 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611216 | orchestrator | 2026-03-19 01:12:24.611221 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-19 01:12:24.611226 | orchestrator | Thursday 19 March 2026 01:09:43 +0000 (0:00:01.448) 0:01:51.256 ******** 2026-03-19 01:12:24.611232 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611237 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611242 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611247 | orchestrator | 2026-03-19 01:12:24.611256 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-19 01:12:24.611261 | orchestrator | Thursday 19 March 2026 01:09:44 +0000 (0:00:01.041) 0:01:52.298 ******** 2026-03-19 01:12:24.611267 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611271 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611277 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611281 | orchestrator | 2026-03-19 01:12:24.611286 | orchestrator | TASK [octavia : Restart octavia-interface.service 
if required] ***************** 2026-03-19 01:12:24.611291 | orchestrator | Thursday 19 March 2026 01:09:45 +0000 (0:00:00.974) 0:01:53.272 ******** 2026-03-19 01:12:24.611297 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611302 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611307 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611311 | orchestrator | 2026-03-19 01:12:24.611322 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-19 01:12:24.611327 | orchestrator | Thursday 19 March 2026 01:09:47 +0000 (0:00:01.807) 0:01:55.080 ******** 2026-03-19 01:12:24.611332 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:12:24.611338 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:12:24.611343 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:12:24.611348 | orchestrator | 2026-03-19 01:12:24.611353 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-19 01:12:24.611358 | orchestrator | Thursday 19 March 2026 01:09:48 +0000 (0:00:01.339) 0:01:56.420 ******** 2026-03-19 01:12:24.611367 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611372 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:12:24.611377 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:12:24.611382 | orchestrator | 2026-03-19 01:12:24.611387 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-19 01:12:24.611393 | orchestrator | Thursday 19 March 2026 01:09:49 +0000 (0:00:00.592) 0:01:57.012 ******** 2026-03-19 01:12:24.611398 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:12:24.611402 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:12:24.611408 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611413 | orchestrator | 2026-03-19 01:12:24.611418 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 
01:12:24.611432 | orchestrator | Thursday 19 March 2026 01:09:51 +0000 (0:00:02.386) 0:01:59.398 ******** 2026-03-19 01:12:24.611438 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:12:24.611443 | orchestrator | 2026-03-19 01:12:24.611448 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-19 01:12:24.611453 | orchestrator | Thursday 19 March 2026 01:09:52 +0000 (0:00:00.761) 0:02:00.159 ******** 2026-03-19 01:12:24.611458 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611464 | orchestrator | 2026-03-19 01:12:24.611469 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-19 01:12:24.611476 | orchestrator | Thursday 19 March 2026 01:09:56 +0000 (0:00:03.867) 0:02:04.027 ******** 2026-03-19 01:12:24.611482 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611487 | orchestrator | 2026-03-19 01:12:24.611493 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-19 01:12:24.611498 | orchestrator | Thursday 19 March 2026 01:09:59 +0000 (0:00:03.296) 0:02:07.323 ******** 2026-03-19 01:12:24.611503 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-19 01:12:24.611509 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-19 01:12:24.611515 | orchestrator | 2026-03-19 01:12:24.611520 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-19 01:12:24.611526 | orchestrator | Thursday 19 March 2026 01:10:05 +0000 (0:00:06.107) 0:02:13.431 ******** 2026-03-19 01:12:24.611531 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611537 | orchestrator | 2026-03-19 01:12:24.611543 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-19 01:12:24.611548 | 
orchestrator | Thursday 19 March 2026 01:10:08 +0000 (0:00:03.162) 0:02:16.593 ******** 2026-03-19 01:12:24.611555 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:12:24.611559 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:12:24.611562 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:12:24.611565 | orchestrator | 2026-03-19 01:12:24.611569 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-19 01:12:24.611572 | orchestrator | Thursday 19 March 2026 01:10:09 +0000 (0:00:00.284) 0:02:16.878 ******** 2026-03-19 01:12:24.611577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.611594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.611599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.611603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-03-19 01:12:24.611607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.611611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.611615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.611662 | orchestrator | 2026-03-19 01:12:24.611665 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-19 01:12:24.611669 | orchestrator | Thursday 19 March 2026 01:10:12 +0000 (0:00:03.091) 0:02:19.970 ******** 2026-03-19 01:12:24.611672 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:12:24.611676 | orchestrator | 2026-03-19 01:12:24.611681 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-19 01:12:24.611685 | orchestrator | Thursday 19 March 2026 01:10:12 +0000 (0:00:00.118) 0:02:20.088 ******** 2026-03-19 01:12:24.611688 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:12:24.611691 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:12:24.611695 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:12:24.611698 | orchestrator | 2026-03-19 01:12:24.611701 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-19 
01:12:24.611705 | orchestrator | Thursday 19 March 2026 01:10:12 +0000 (0:00:00.275) 0:02:20.364 ******** 2026-03-19 01:12:24.611708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.611727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.611731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.611737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.611743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.611746 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:12:24.611757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.611761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.611765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.611769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.611777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.611781 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:12:24.611787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.611793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.611797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.611801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.611804 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.611812 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:12:24.611816 | orchestrator | 2026-03-19 01:12:24.611819 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 01:12:24.611823 | orchestrator | Thursday 19 March 2026 01:10:13 +0000 (0:00:00.661) 0:02:21.025 ******** 2026-03-19 01:12:24.611826 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:12:24.611829 | orchestrator | 2026-03-19 01:12:24.611839 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-19 01:12:24.611843 | orchestrator | Thursday 19 March 2026 01:10:14 +0000 (0:00:00.692) 0:02:21.717 ******** 2026-03-19 01:12:24.611846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612127 | orchestrator | 2026-03-19 01:12:24.612131 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-19 01:12:24.612135 | orchestrator | Thursday 19 March 2026 01:10:18 +0000 (0:00:04.391) 0:02:26.109 ******** 2026-03-19 01:12:24.612138 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.612142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.612148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.612158 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:12:24.612167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.612171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.612174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.612198 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:12:24.612206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.612212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.612221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.612243 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:12:24.612249 | orchestrator | 2026-03-19 01:12:24.612255 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-19 01:12:24.612261 | orchestrator | Thursday 19 March 2026 01:10:19 +0000 (0:00:00.605) 0:02:26.715 ******** 2026-03-19 01:12:24.612267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.612272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.612279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.612296 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:12:24.612299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.612303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.612306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.612326 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:12:24.612329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 01:12:24.612333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 01:12:24.612336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 
01:12:24.612340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 01:12:24.612346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 01:12:24.612349 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:12:24.612353 | orchestrator | 2026-03-19 01:12:24.612356 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-19 01:12:24.612359 | orchestrator | Thursday 19 March 2026 01:10:19 +0000 (0:00:00.875) 0:02:27.591 ******** 2026-03-19 01:12:24.612366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612870 | orchestrator | 2026-03-19 01:12:24.612874 | orchestrator | TASK [octavia 
: Copying over octavia-wsgi.conf] ******************************** 2026-03-19 01:12:24.612877 | orchestrator | Thursday 19 March 2026 01:10:24 +0000 (0:00:04.267) 0:02:31.858 ******** 2026-03-19 01:12:24.612881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-19 01:12:24.612885 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-19 01:12:24.612888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-19 01:12:24.612891 | orchestrator | 2026-03-19 01:12:24.612895 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-19 01:12:24.612898 | orchestrator | Thursday 19 March 2026 01:10:25 +0000 (0:00:01.385) 0:02:33.244 ******** 2026-03-19 01:12:24.612902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 01:12:24.612920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 01:12:24.612931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-03-19 01:12:24.612950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 01:12:24.612960 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:12:24.612981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:12:24.612987 | orchestrator |
2026-03-19 01:12:24.612993 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-19 01:12:24.612996 | orchestrator | Thursday 19 March 2026 01:10:43 +0000 (0:00:17.804) 0:02:51.048 ********
2026-03-19 01:12:24.613000 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613003 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:12:24.613006 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:12:24.613010 | orchestrator |
2026-03-19 01:12:24.613013 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-19 01:12:24.613016 | orchestrator | Thursday 19 March 2026 01:10:45 +0000 (0:00:01.718) 0:02:52.767 ********
2026-03-19 01:12:24.613020 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613023 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613028 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613032 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613035 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613039 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613042 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613045 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613048 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613052 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613055 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613058 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613062 | orchestrator |
2026-03-19 01:12:24.613065 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-19 01:12:24.613068 | orchestrator | Thursday 19 March 2026 01:10:50 +0000 (0:00:05.821) 0:02:58.588 ********
2026-03-19 01:12:24.613072 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613075 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613078 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613082 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613085 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613089 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613092 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613096 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613099 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613102 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613106 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613109 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613112 | orchestrator |
2026-03-19 01:12:24.613116 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-19 01:12:24.613119 | orchestrator | Thursday 19 March 2026 01:10:55 +0000 (0:00:04.653) 0:03:03.242 ********
2026-03-19 01:12:24.613122 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613128 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613131 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-19 01:12:24.613135 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613138 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613141 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-19 01:12:24.613145 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613148 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613151 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-19 01:12:24.613155 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613192 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613197 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-19 01:12:24.613200 | orchestrator |
2026-03-19 01:12:24.613203 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-19 01:12:24.613207 | orchestrator | Thursday 19 March 2026 01:11:00 +0000 (0:00:04.577) 0:03:07.820 ********
2026-03-19 01:12:24.613214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-19 01:12:24.613221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-19 01:12:24.613225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-19 01:12:24.613233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-19 01:12:24.613236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-19 01:12:24.613240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-19 01:12:24.613246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-19 01:12:24.613251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-19 01:12:24.613255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-19 01:12:24.613258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-19 01:12:24.613264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-19 01:12:24.613268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-19 01:12:24.613271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:12:24.613278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:12:24.613287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-19 01:12:24.613293 | orchestrator |
2026-03-19 01:12:24.613299 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-19 01:12:24.613305 | orchestrator | Thursday 19 March 2026 01:11:03 +0000 (0:00:03.288) 0:03:11.108 ********
2026-03-19 01:12:24.613310 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:12:24.613316 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:12:24.613321 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:12:24.613326 | orchestrator |
2026-03-19 01:12:24.613333 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-19 01:12:24.613338 | orchestrator | Thursday 19 March 2026 01:11:03 +0000 (0:00:00.443) 0:03:11.552 ********
2026-03-19 01:12:24.613343 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613349 | orchestrator |
2026-03-19 01:12:24.613354 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-19 01:12:24.613364 | orchestrator | Thursday 19 March 2026 01:11:05 +0000 (0:00:01.914) 0:03:13.466 ********
2026-03-19 01:12:24.613370 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613376 | orchestrator |
2026-03-19 01:12:24.613381 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-19 01:12:24.613387 | orchestrator | Thursday 19 March 2026 01:11:07 +0000 (0:00:01.886) 0:03:15.353 ********
2026-03-19 01:12:24.613393 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613399 | orchestrator |
2026-03-19 01:12:24.613405 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-19 01:12:24.613411 | orchestrator | Thursday 19 March 2026 01:11:09 +0000 (0:00:02.157) 0:03:17.510 ********
2026-03-19 01:12:24.613417 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613422 | orchestrator |
2026-03-19 01:12:24.613428 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-19 01:12:24.613434 | orchestrator | Thursday 19 March 2026 01:11:11 +0000 (0:00:02.135) 0:03:19.645 ********
2026-03-19 01:12:24.613440 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613446 | orchestrator |
2026-03-19 01:12:24.613453 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-19 01:12:24.613459 | orchestrator | Thursday 19 March 2026 01:11:31 +0000 (0:00:19.222) 0:03:38.868 ********
2026-03-19 01:12:24.613465 | orchestrator |
2026-03-19 01:12:24.613472 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-19 01:12:24.613479 | orchestrator | Thursday 19 March 2026 01:11:31 +0000 (0:00:00.068) 0:03:38.936 ********
2026-03-19 01:12:24.613485 | orchestrator |
2026-03-19 01:12:24.613491 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-19 01:12:24.613497 | orchestrator | Thursday 19 March 2026 01:11:31 +0000 (0:00:00.067) 0:03:39.004 ********
2026-03-19 01:12:24.613503 | orchestrator |
2026-03-19 01:12:24.613508 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-19 01:12:24.613514 | orchestrator | Thursday 19 March 2026 01:11:31 +0000 (0:00:00.068) 0:03:39.072 ********
2026-03-19 01:12:24.613519 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613525 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:12:24.613531 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:12:24.613537 | orchestrator |
2026-03-19 01:12:24.613543 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-19 01:12:24.613548 | orchestrator | Thursday 19 March 2026 01:11:49 +0000 (0:00:17.823) 0:03:56.895 ********
2026-03-19 01:12:24.613557 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613563 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:12:24.613569 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:12:24.613575 | orchestrator |
2026-03-19 01:12:24.613582 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-19 01:12:24.613588 | orchestrator | Thursday 19 March 2026 01:12:00 +0000 (0:00:11.304) 0:04:08.200 ********
2026-03-19 01:12:24.613594 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613601 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:12:24.613607 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:12:24.613612 | orchestrator |
2026-03-19 01:12:24.613619 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-19 01:12:24.613626 | orchestrator | Thursday 19 March 2026 01:12:06 +0000 (0:00:05.676) 0:04:13.877 ********
2026-03-19 01:12:24.613632 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613639 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:12:24.613645 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:12:24.613651 | orchestrator |
2026-03-19 01:12:24.613657 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-19 01:12:24.613663 | orchestrator | Thursday 19 March 2026 01:12:11 +0000 (0:00:05.593) 0:04:19.470 ********
2026-03-19 01:12:24.613669 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:12:24.613675 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:12:24.613687 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:12:24.613690 | orchestrator |
2026-03-19 01:12:24.613694 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:12:24.613700 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:12:24.613704 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 01:12:24.613708 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 01:12:24.613761 | orchestrator |
2026-03-19 01:12:24.613765 | orchestrator |
2026-03-19 01:12:24.613768 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:12:24.613772 | orchestrator | Thursday 19 March 2026 01:12:21 +0000 (0:00:10.148) 0:04:29.619 ********
2026-03-19 01:12:24.613780 | orchestrator | ===============================================================================
2026-03-19 01:12:24.613784 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.22s
2026-03-19 01:12:24.613787 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.82s
2026-03-19 01:12:24.613790 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.80s
2026-03-19 01:12:24.613794 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.44s
2026-03-19 01:12:24.613797 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.97s
2026-03-19 01:12:24.613800 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.30s
2026-03-19 01:12:24.613804 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.08s
2026-03-19 01:12:24.613807 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.15s
2026-03-19 01:12:24.613810 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.89s
2026-03-19 01:12:24.613814 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.18s
2026-03-19 01:12:24.613817 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.11s
2026-03-19 01:12:24.613820 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.82s
2026-03-19 01:12:24.613824 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.68s
2026-03-19 01:12:24.613827 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.63s
2026-03-19 01:12:24.613830 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.59s
2026-03-19 01:12:24.613834 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.40s
2026-03-19 01:12:24.613837 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 4.84s
2026-03-19 01:12:24.613840 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 4.83s
2026-03-19 01:12:24.613844 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.65s
2026-03-19 01:12:24.613847 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 4.58s
2026-03-19 01:12:24.613851 | orchestrator | 2026-03-19 01:12:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:27.648316 | orchestrator | 2026-03-19 01:12:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:30.689549 | orchestrator | 2026-03-19 01:12:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:33.730515 | orchestrator | 2026-03-19 01:12:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:36.771334 | orchestrator | 2026-03-19 01:12:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:39.811192 | orchestrator | 2026-03-19 01:12:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:42.854730 | orchestrator | 2026-03-19 01:12:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:45.897653 | orchestrator | 2026-03-19 01:12:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:48.941821 | orchestrator | 2026-03-19 01:12:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:51.988714 | orchestrator | 2026-03-19 01:12:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:55.026282 | orchestrator | 2026-03-19 01:12:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:12:58.067611 | orchestrator | 2026-03-19 01:12:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:01.109994 | orchestrator | 2026-03-19 01:13:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:04.149240 | orchestrator | 2026-03-19 01:13:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:07.191187 | orchestrator | 2026-03-19 01:13:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:10.234531 | orchestrator | 2026-03-19 01:13:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:13.279525 | orchestrator | 2026-03-19 01:13:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:16.329234 | orchestrator | 2026-03-19 01:13:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:19.369497 | orchestrator | 2026-03-19 01:13:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:22.421975 | orchestrator | 2026-03-19 01:13:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-19 01:13:25.461018 | orchestrator |
2026-03-19 01:13:25.643403 | orchestrator |
2026-03-19 01:13:25.649539 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Mar 19 01:13:25 UTC 2026
2026-03-19 01:13:25.649826 | orchestrator |
2026-03-19 01:13:26.047528 | orchestrator | ok: Runtime: 0:32:48.490029
2026-03-19 01:13:26.297028 |
2026-03-19 01:13:26.297193 | TASK [Bootstrap services]
2026-03-19 01:13:27.140535 | orchestrator |
2026-03-19 01:13:27.140671 | orchestrator | # BOOTSTRAP
2026-03-19 01:13:27.140681 | orchestrator |
2026-03-19 01:13:27.140687 | orchestrator | + set -e
2026-03-19 01:13:27.140691 | orchestrator | + echo
2026-03-19 01:13:27.140697 | orchestrator | + echo '# BOOTSTRAP'
2026-03-19 01:13:27.140704 | orchestrator | + echo
2026-03-19 01:13:27.140748 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-19 01:13:27.147822 | orchestrator | + set -e
2026-03-19 01:13:27.147911 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-19 01:13:31.625955 | orchestrator | 2026-03-19 01:13:31 | INFO  | It takes a moment until task bf2482a9-2117-4994-9546-46ed30037d53 (flavor-manager) has been started and output is visible here.
2026-03-19 01:13:40.546538 | orchestrator | 2026-03-19 01:13:35 | INFO  | Flavor SCS-1L-1 created
2026-03-19 01:13:40.546695 | orchestrator | 2026-03-19 01:13:35 | INFO  | Flavor SCS-1L-1-5 created
2026-03-19 01:13:40.546712 | orchestrator | 2026-03-19 01:13:36 | INFO  | Flavor SCS-1V-2 created
2026-03-19 01:13:40.546717 | orchestrator | 2026-03-19 01:13:36 | INFO  | Flavor SCS-1V-2-5 created
2026-03-19 01:13:40.546721 | orchestrator | 2026-03-19 01:13:36 | INFO  | Flavor SCS-1V-4 created
2026-03-19 01:13:40.546805 | orchestrator | 2026-03-19 01:13:36 | INFO  | Flavor SCS-1V-4-10 created
2026-03-19 01:13:40.546812 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-1V-8 created
2026-03-19 01:13:40.546817 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-1V-8-20 created
2026-03-19 01:13:40.546832 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-2V-4 created
2026-03-19 01:13:40.546836 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-2V-4-10 created
2026-03-19 01:13:40.546840 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-2V-8 created
2026-03-19 01:13:40.546844 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-2V-8-20 created
2026-03-19 01:13:40.546848 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-2V-16 created
2026-03-19 01:13:40.546852 | orchestrator | 2026-03-19 01:13:37 | INFO  | Flavor SCS-2V-16-50 created
2026-03-19 01:13:40.546856 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-4V-8 created
2026-03-19 01:13:40.546860 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-4V-8-20 created
2026-03-19 01:13:40.546864 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-4V-16 created
2026-03-19 01:13:40.546867 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-4V-16-50 created
2026-03-19 01:13:40.546871 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-4V-32 created
2026-03-19 01:13:40.546875 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-4V-32-100 created
2026-03-19 01:13:40.546879 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-8V-16 created
2026-03-19 01:13:40.546883 | orchestrator | 2026-03-19 01:13:38 | INFO  | Flavor SCS-8V-16-50 created
2026-03-19 01:13:40.546887 | orchestrator | 2026-03-19 01:13:39 | INFO  | Flavor SCS-8V-32 created
2026-03-19 01:13:40.546891 | orchestrator | 2026-03-19 01:13:39 | INFO  | Flavor SCS-8V-32-100 created
2026-03-19 01:13:40.546894 | orchestrator | 2026-03-19 01:13:39 | INFO  | Flavor SCS-16V-32 created
2026-03-19 01:13:40.546898 | orchestrator | 2026-03-19 01:13:39 | INFO  | Flavor SCS-16V-32-100 created
2026-03-19 01:13:40.546902 | orchestrator | 2026-03-19 01:13:39 | INFO  | Flavor SCS-2V-4-20s created
2026-03-19 01:13:40.546906 | orchestrator | 2026-03-19 01:13:39 | INFO  | Flavor SCS-4V-8-50s created
2026-03-19 01:13:40.546910 | orchestrator | 2026-03-19 01:13:40 | INFO  | Flavor SCS-4V-16-100s created
2026-03-19 01:13:40.546914 | orchestrator | 2026-03-19 01:13:40 | INFO  | Flavor SCS-8V-32-100s created
2026-03-19 01:13:42.055925 | orchestrator | 2026-03-19 01:13:42 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-19 01:13:52.200029 | orchestrator | 2026-03-19 01:13:52 | INFO  | Prepare task for execution of bootstrap-basic.
2026-03-19 01:13:52.279496 | orchestrator | 2026-03-19 01:13:52 | INFO  | Task 760b8ff1-4d1f-4a4d-9b0e-3e957f3cd8a8 (bootstrap-basic) was prepared for execution.
2026-03-19 01:13:52.279548 | orchestrator | 2026-03-19 01:13:52 | INFO  | It takes a moment until task 760b8ff1-4d1f-4a4d-9b0e-3e957f3cd8a8 (bootstrap-basic) has been started and output is visible here.
2026-03-19 01:14:37.579536 | orchestrator |
2026-03-19 01:14:37.579637 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-19 01:14:37.579650 | orchestrator |
2026-03-19 01:14:37.579657 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 01:14:37.579664 | orchestrator | Thursday 19 March 2026 01:13:55 +0000 (0:00:00.099) 0:00:00.099 ********
2026-03-19 01:14:37.579671 | orchestrator | ok: [localhost]
2026-03-19 01:14:37.579679 | orchestrator |
2026-03-19 01:14:37.579685 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-19 01:14:37.579692 | orchestrator | Thursday 19 March 2026 01:13:57 +0000 (0:00:01.928) 0:00:02.028 ********
2026-03-19 01:14:37.579701 | orchestrator | ok: [localhost]
2026-03-19 01:14:37.579705 | orchestrator |
2026-03-19 01:14:37.579709 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-19 01:14:37.579824 | orchestrator | Thursday 19 March 2026 01:14:06 +0000 (0:00:08.933) 0:00:10.962 ********
2026-03-19 01:14:37.579832 | orchestrator | changed: [localhost]
2026-03-19 01:14:37.579839 | orchestrator |
2026-03-19 01:14:37.579845 | orchestrator | TASK [Create public network] ***************************************************
2026-03-19 01:14:37.579851 | orchestrator | Thursday 19 March 2026 01:14:14 +0000 (0:00:07.819) 0:00:18.782 ********
2026-03-19 01:14:37.579858 | orchestrator | changed: [localhost]
2026-03-19 01:14:37.579864 | orchestrator |
2026-03-19 01:14:37.579875 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-19 01:14:37.579883 | orchestrator | Thursday 19 March 2026 01:14:19 +0000 (0:00:05.469) 0:00:24.251 ********
2026-03-19 01:14:37.579890 | orchestrator | changed: [localhost]
2026-03-19 01:14:37.579896 | orchestrator |
2026-03-19 01:14:37.579903 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-19 01:14:37.579909 | orchestrator | Thursday 19 March 2026 01:14:25 +0000 (0:00:05.982) 0:00:30.234 ********
2026-03-19 01:14:37.579916 | orchestrator | changed: [localhost]
2026-03-19 01:14:37.579923 | orchestrator |
2026-03-19 01:14:37.579929 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-19 01:14:37.579936 | orchestrator | Thursday 19 March 2026 01:14:29 +0000 (0:00:04.161) 0:00:34.396 ********
2026-03-19 01:14:37.579942 | orchestrator | changed: [localhost]
2026-03-19 01:14:37.579948 | orchestrator |
2026-03-19 01:14:37.579954 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-19 01:14:37.579972 | orchestrator | Thursday 19 March 2026 01:14:33 +0000 (0:00:04.006) 0:00:38.402 ********
2026-03-19 01:14:37.579979 | orchestrator | ok: [localhost]
2026-03-19 01:14:37.579985 | orchestrator |
2026-03-19 01:14:37.579992 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:14:37.579998 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:14:37.580006 | orchestrator |
2026-03-19 01:14:37.580012 | orchestrator |
2026-03-19 01:14:37.580018 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:14:37.580023 | orchestrator | Thursday 19 March 2026 01:14:37 +0000 (0:00:03.650) 0:00:42.053 ********
2026-03-19 01:14:37.580029 | orchestrator | ===============================================================================
2026-03-19 01:14:37.580035 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.93s
2026-03-19 01:14:37.580099 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.82s
2026-03-19 01:14:37.580107 | orchestrator | Set public network to default ------------------------------------------- 5.98s
2026-03-19 01:14:37.580114 | orchestrator | Create public network --------------------------------------------------- 5.47s
2026-03-19 01:14:37.580120 | orchestrator | Create public subnet ---------------------------------------------------- 4.16s
2026-03-19 01:14:37.580126 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.01s
2026-03-19 01:14:37.580132 | orchestrator | Create manager role ----------------------------------------------------- 3.65s
2026-03-19 01:14:37.580138 | orchestrator | Gathering Facts --------------------------------------------------------- 1.93s
2026-03-19 01:14:39.513641 | orchestrator | 2026-03-19 01:14:39 | INFO  | It takes a moment until task 2f41765d-bfa4-4795-9b9f-ecf22810c154 (image-manager) has been started and output is visible here.
2026-03-19 01:14:42.365834 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url.
2026-03-19 01:14:42.365904 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url.
2026-03-19 01:14:42.365913 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
2026-03-19 01:14:42.365920 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:131 │
2026-03-19 01:14:42.365927 | orchestrator | │ in create_cli_args │
2026-03-19 01:14:42.365933 | orchestrator | │ │
2026-03-19 01:14:42.365940 | orchestrator | │ 128 │ │ logger.add(sys.stderr, format=log_fmt, level=level, colorize= │
2026-03-19 01:14:42.365946 | orchestrator | │ 129 │ │ │
2026-03-19 01:14:42.365953 | orchestrator | │ 130 │ │ if __name__ == "__main__" or __name__ == "openstack_image_man │
2026-03-19 01:14:42.365959 | orchestrator | │ ❱ 131 │ │ │ self.main() │
2026-03-19 01:14:42.365965 | orchestrator | │ 132 │ │
2026-03-19 01:14:42.365972 | orchestrator | │ 133 │ def read_image_files(self, return_all_images=False) -> list: │
2026-03-19 01:14:42.365978 | orchestrator | │ 134 │ │ """Read all YAML files in self.CONF.images""" │
2026-03-19 01:14:42.365984 | orchestrator | │ │
2026-03-19 01:14:42.365990 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:258 │
2026-03-19 01:14:42.365995 | orchestrator | │ in main │
2026-03-19 01:14:42.366001 | orchestrator | │ │
2026-03-19 01:14:42.366007 | orchestrator | │ 255 │ │ else: │
2026-03-19 01:14:42.366035 | orchestrator | │ 256 │ │ │ self.create_connection() │
2026-03-19 01:14:42.366048 | orchestrator | │ 257 │ │ │ images = self.read_image_files() │
2026-03-19 01:14:42.366054 | orchestrator | │ ❱ 258 │ │ │ managed_images = self.process_images(images) │
2026-03-19 01:14:42.366060 | orchestrator | │ 259 │ │ │ │
2026-03-19 01:14:42.366065 | orchestrator | │ 260 │ │ │ # ignore all non-specified images when using --filter │
2026-03-19 01:14:42.366071 | orchestrator | │ 261 │ │ │ if self.CONF.filter: │
2026-03-19 01:14:42.366077 | orchestrator | │ │
2026-03-19 01:14:42.366083 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:375 │
2026-03-19 01:14:42.366109 | orchestrator | │ in process_images │
2026-03-19 01:14:42.366115 | orchestrator | │ │
2026-03-19 01:14:42.366122 | orchestrator | │ 372 │ │ │ if "image_name" not in image["meta"]: │
2026-03-19 01:14:42.366128 | orchestrator | │ 373 │ │ │ │ image["meta"]["image_name"] = image["name"] │
2026-03-19 01:14:42.366134 | orchestrator | │ 374 │ │ │ │
2026-03-19 01:14:42.366144 | orchestrator | │ ❱ 375 │ │ │ existing_images, imported_image, previous_image = self.pr │
2026-03-19 01:14:42.366151 | orchestrator | │ 376 │ │ │ │ image, versions, sorted_versions, image["meta"].copy( │
2026-03-19 01:14:42.366158 | orchestrator | │ 377 │ │ │ ) │
2026-03-19 01:14:42.366164 | orchestrator | │ 378 │ │ │ managed_images = managed_images.union(existing_images) │
2026-03-19 01:14:42.366171 | orchestrator | │ │
2026-03-19 01:14:42.366178 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:548 │
2026-03-19 01:14:42.366184 | orchestrator | │ in process_image │
2026-03-19 01:14:42.366190 | orchestrator | │ │
2026-03-19 01:14:42.366196 | orchestrator | │ 545 │ │ Returns: │
2026-03-19 01:14:42.366202 | orchestrator | │ 546 │ │ │ Tuple with (existing_images, imported_image, previous_ima │
2026-03-19 01:14:42.366208 | orchestrator | │ 547 │ │ """ │
2026-03-19 01:14:42.366215 | orchestrator | │ ❱ 548 │ │ cloud_images = self.get_images() │
2026-03-19 01:14:42.366221 | orchestrator | │ 549 │ │ │
2026-03-19 01:14:42.366240 | orchestrator | │ 550 │ │ existing_images: Set[str] = set() │
2026-03-19 01:14:42.366246 | orchestrator | │ 551 │ │ imported_image = None │
2026-03-19 01:14:42.366253 | orchestrator | │ │
2026-03-19 01:14:42.366276 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:469 │
2026-03-19 01:14:42.366283 | orchestrator | │ in get_images │
2026-03-19 01:14:42.366290 | orchestrator | │ │
2026-03-19 01:14:42.366296 | orchestrator | │ 466 │ │ """ │
2026-03-19 01:14:42.366303 | orchestrator | │ 467 │ │ result = {} │
2026-03-19 01:14:42.366309 | orchestrator | │ 468 │ │ │
2026-03-19 01:14:42.366315 | orchestrator | │ ❱ 469 │ │ for image in self.conn.image.images(): │
2026-03-19 01:14:42.366322 | orchestrator | │ 470 │ │ │ if self.CONF.tag in image.tags and ( │
2026-03-19 01:14:42.366329 | orchestrator | │ 471 │ │ │ │ image.visibility == "public" │
2026-03-19 01:14:42.366335 | orchestrator | │ 472 │ │ │ │ or image.owner == self.conn.current_project_id │
2026-03-19 01:14:42.366342 | orchestrator | │ │
2026-03-19 01:14:42.366349 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack/service_description.py:91 │
2026-03-19 01:14:42.366356 | orchestrator | │ in __get__ │
2026-03-19 01:14:42.366369 | orchestrator | │ │
2026-03-19 01:14:42.366380 | orchestrator | │ 88 │ │ if instance is None: │
2026-03-19 01:14:42.366387 | orchestrator | │ 89 │ │ │ return self │
2026-03-19 01:14:42.366394 | orchestrator | │ 90 │ │ if self.service_type not in instance._proxies: │
2026-03-19 01:14:42.366400 | orchestrator | │ ❱ 91 │ │ │ proxy = self._make_proxy(instance) │
2026-03-19 01:14:42.366407 | orchestrator | │ 92 │ │ │ if not isinstance(proxy, _ServiceDisabledProxyShim): │
2026-03-19 01:14:42.366414 | orchestrator | │ 93 │ │ │ │ # The keystone proxy has a method called get_endpoint │
2026-03-19 01:14:42.366421 | orchestrator | │ 94 │ │ │ │ # that is about managing keystone endpoints. This is │
2026-03-19 01:14:42.366428 | orchestrator | │ │
2026-03-19 01:14:42.366435 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack/service_description.py:293 │
2026-03-19 01:14:42.366442 | orchestrator | │ in _make_proxy │
2026-03-19 01:14:42.366449 | orchestrator | │ │
2026-03-19 01:14:42.366455 | orchestrator | │ 290 │ │ if found_version is None: │
2026-03-19 01:14:42.366462 | orchestrator | │ 291 │ │ │ region_name = instance.config.get_region_name(self.service │
2026-03-19 01:14:42.366469 | orchestrator | │ 292 │ │ │ if version_kwargs: │
2026-03-19 01:14:42.366476 | orchestrator | │ ❱ 293 │ │ │ │ raise exceptions.NotSupported( │
2026-03-19 01:14:42.366482 | orchestrator | │ 294 │ │ │ │ │ f"The {self.service_type} service for " │
2026-03-19 01:14:42.366489 | orchestrator | │ 295 │ │ │ │ │ f"{instance.name}:{region_name} exists but does no │
2026-03-19 01:14:42.366496 | orchestrator | │ 296 │ │ │ │ │ f"any supported versions." │
2026-03-19 01:14:42.366510 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯
2026-03-19 01:14:42.366519 | orchestrator | NotSupported: The image service for admin: exists but does not have any
2026-03-19 01:14:42.366527 | orchestrator | supported versions.
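The traceback above ends in openstacksdk's version negotiation: discovery against the Glance endpoint failed ("Failed to contact the endpoint ... Fallback to using that endpoint as the base url."), so no usable API version could be established, and building the image proxy raised `NotSupported`. As a simplified, self-contained illustration of that negotiation logic (this is not openstacksdk's actual code; the `SUPPORTED` set, `NotSupported` class, and `negotiate` function are assumptions for the sketch):

```python
# Simplified sketch of version-discovery fallback, as seen in the log above.
# Assumed for illustration: SUPPORTED, NotSupported, negotiate are hypothetical.

SUPPORTED = {"2.0", "2.1"}  # API versions the client can speak (assumed)


class NotSupported(Exception):
    """Stand-in for openstack.exceptions.NotSupported."""


def negotiate(discovered_versions, catalog_url):
    """Pick the best usable version, or raise NotSupported.

    discovered_versions is None when the discovery document could not be
    fetched (the endpoint did not answer), mirroring the fallback message
    in the log.
    """
    if discovered_versions is None:
        # Discovery failed; fall back to the catalog URL, which tells us
        # nothing about available versions.
        print(f"Failed to contact the endpoint at {catalog_url} for "
              "discovery. Fallback to using that endpoint as the base url.")
        discovered_versions = set()
    usable = SUPPORTED & set(discovered_versions)
    if not usable:
        # This is the condition behind the "exists but does not have any
        # supported versions" error in the traceback.
        raise NotSupported(
            "The image service exists but does not have any supported versions.")
    return max(usable, key=lambda v: tuple(map(int, v.split("."))))
```

In other words, the error is not about the catalog entry (the image service is registered) but about the endpoint behind it being unreachable or answering without a negotiable version, which is consistent with the two discovery warnings logged just before the traceback.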
2026-03-19 01:14:42.880281 | orchestrator | ERROR
2026-03-19 01:14:42.880682 | orchestrator | {
2026-03-19 01:14:42.880791 | orchestrator | "delta": "0:01:15.893582",
2026-03-19 01:14:42.880863 | orchestrator | "end": "2026-03-19 01:14:42.589905",
2026-03-19 01:14:42.880925 | orchestrator | "msg": "non-zero return code",
2026-03-19 01:14:42.880981 | orchestrator | "rc": 1,
2026-03-19 01:14:42.881034 | orchestrator | "start": "2026-03-19 01:13:26.696323"
2026-03-19 01:14:42.881111 | orchestrator | } failure
2026-03-19 01:14:42.895862 | 
2026-03-19 01:14:42.895997 | PLAY RECAP
2026-03-19 01:14:42.896213 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-03-19 01:14:42.896254 | 
2026-03-19 01:14:43.106143 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-19 01:14:43.108507 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-19 01:14:43.896705 | 
2026-03-19 01:14:43.896861 | PLAY [Post output play]
2026-03-19 01:14:43.912808 | 
2026-03-19 01:14:43.912959 | LOOP [stage-output : Register sources]
2026-03-19 01:14:43.966159 | 
2026-03-19 01:14:43.966378 | TASK [stage-output : Check sudo]
2026-03-19 01:14:44.787099 | orchestrator | sudo: a password is required
2026-03-19 01:14:45.002483 | orchestrator | ok: Runtime: 0:00:00.009846
2026-03-19 01:14:45.016601 | 
2026-03-19 01:14:45.016790 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-19 01:14:45.056408 | 
2026-03-19 01:14:45.056702 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-19 01:14:45.128213 | orchestrator | ok
2026-03-19 01:14:45.137157 | 
2026-03-19 01:14:45.137308 | LOOP [stage-output : Ensure target folders exist]
2026-03-19 01:14:45.607576 | orchestrator | ok: "docs"
2026-03-19 01:14:45.607856 | 
2026-03-19 01:14:45.806999 | orchestrator | ok: "artifacts"
2026-03-19 01:14:46.028131 | orchestrator | ok: "logs"
2026-03-19 01:14:46.048897 | 
2026-03-19 01:14:46.049099 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-19 01:14:46.089029 | 
2026-03-19 01:14:46.089338 | TASK [stage-output : Make all log files readable]
2026-03-19 01:14:46.332770 | orchestrator | ok
2026-03-19 01:14:46.347719 | 
2026-03-19 01:14:46.347887 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-19 01:14:46.383502 | orchestrator | skipping: Conditional result was False
2026-03-19 01:14:46.392468 | 
2026-03-19 01:14:46.392585 | TASK [stage-output : Discover log files for compression]
2026-03-19 01:14:46.417866 | orchestrator | skipping: Conditional result was False
2026-03-19 01:14:46.430501 | 
2026-03-19 01:14:46.430669 | LOOP [stage-output : Archive everything from logs]
2026-03-19 01:14:46.484006 | 
2026-03-19 01:14:46.484235 | PLAY [Post cleanup play]
2026-03-19 01:14:46.494293 | 
2026-03-19 01:14:46.494421 | TASK [Set cloud fact (Zuul deployment)]
2026-03-19 01:14:46.562859 | orchestrator | ok
2026-03-19 01:14:46.575048 | 
2026-03-19 01:14:46.575223 | TASK [Set cloud fact (local deployment)]
2026-03-19 01:14:46.609847 | orchestrator | skipping: Conditional result was False
2026-03-19 01:14:46.625731 | 
2026-03-19 01:14:46.625895 | TASK [Clean the cloud environment]
2026-03-19 01:14:47.206179 | orchestrator | 2026-03-19 01:14:47 - clean up servers
2026-03-19 01:14:47.982242 | orchestrator | 2026-03-19 01:14:47 - testbed-manager
2026-03-19 01:14:48.081928 | orchestrator | 2026-03-19 01:14:48 - testbed-node-0
2026-03-19 01:14:48.192267 | orchestrator | 2026-03-19 01:14:48 - testbed-node-5
2026-03-19 01:14:48.280960 | orchestrator | 2026-03-19 01:14:48 - testbed-node-1
2026-03-19 01:14:48.376562 | orchestrator | 2026-03-19 01:14:48 - testbed-node-3
2026-03-19 01:14:48.461768 | orchestrator | 2026-03-19 01:14:48 - testbed-node-2
2026-03-19 01:14:48.548153 | orchestrator | 2026-03-19 01:14:48 - testbed-node-4
2026-03-19 01:14:48.653204 | orchestrator | 2026-03-19 01:14:48 - clean up keypairs
2026-03-19 01:14:48.670260 | orchestrator | 2026-03-19 01:14:48 - testbed
2026-03-19 01:14:48.692985 | orchestrator | 2026-03-19 01:14:48 - wait for servers to be gone
2026-03-19 01:15:01.693012 | orchestrator | 2026-03-19 01:15:01 - clean up ports
2026-03-19 01:15:01.872695 | orchestrator | 2026-03-19 01:15:01 - 35b267e8-eb00-430d-98ae-9b40abe5569c
2026-03-19 01:15:02.117616 | orchestrator | 2026-03-19 01:15:02 - 40bf435a-86d8-4c26-8e12-2054ab23e7cd
2026-03-19 01:15:02.528350 | orchestrator | 2026-03-19 01:15:02 - 59b61709-f904-41e7-a0c6-77cc95b5d1e9
2026-03-19 01:15:02.760451 | orchestrator | 2026-03-19 01:15:02 - 5f35c046-ac7e-465e-a185-75f26baae30b
2026-03-19 01:15:02.986373 | orchestrator | 2026-03-19 01:15:02 - 93e046f6-133e-4221-af90-e5da59af8a2f
2026-03-19 01:15:03.915741 | orchestrator | 2026-03-19 01:15:03 - d8f70375-a310-4c5d-8bc1-80bd443edc83
2026-03-19 01:15:04.143945 | orchestrator | 2026-03-19 01:15:04 - df14e4e7-43ef-4965-9107-d73f41ae9c2b
2026-03-19 01:15:04.366080 | orchestrator | 2026-03-19 01:15:04 - clean up volumes
2026-03-19 01:15:04.482737 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-4-node-base
2026-03-19 01:15:04.520532 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-0-node-base
2026-03-19 01:15:04.566398 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-2-node-base
2026-03-19 01:15:04.611096 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-5-node-base
2026-03-19 01:15:04.655060 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-1-node-base
2026-03-19 01:15:04.696565 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-3-node-base
2026-03-19 01:15:04.737168 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-7-node-4
2026-03-19 01:15:04.777006 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-manager-base
2026-03-19 01:15:04.820041 | orchestrator | 2026-03-19 01:15:04 - testbed-volume-1-node-4
2026-03-19 01:15:05.003833 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-6-node-3
2026-03-19 01:15:05.042840 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-8-node-5
2026-03-19 01:15:05.082809 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-3-node-3
2026-03-19 01:15:05.123978 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-0-node-3
2026-03-19 01:15:05.166276 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-4-node-4
2026-03-19 01:15:05.207630 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-5-node-5
2026-03-19 01:15:05.247830 | orchestrator | 2026-03-19 01:15:05 - testbed-volume-2-node-5
2026-03-19 01:15:05.286292 | orchestrator | 2026-03-19 01:15:05 - disconnect routers
2026-03-19 01:15:05.404137 | orchestrator | 2026-03-19 01:15:05 - testbed
2026-03-19 01:15:06.481426 | orchestrator | 2026-03-19 01:15:06 - clean up subnets
2026-03-19 01:15:06.540868 | orchestrator | 2026-03-19 01:15:06 - subnet-testbed-management
2026-03-19 01:15:06.698414 | orchestrator | 2026-03-19 01:15:06 - clean up networks
2026-03-19 01:15:06.885853 | orchestrator | 2026-03-19 01:15:06 - net-testbed-management
2026-03-19 01:15:07.180447 | orchestrator | 2026-03-19 01:15:07 - clean up security groups
2026-03-19 01:15:07.229977 | orchestrator | 2026-03-19 01:15:07 - testbed-node
2026-03-19 01:15:07.333968 | orchestrator | 2026-03-19 01:15:07 - testbed-management
2026-03-19 01:15:07.537229 | orchestrator | 2026-03-19 01:15:07 - clean up floating ips
2026-03-19 01:15:07.570782 | orchestrator | 2026-03-19 01:15:07 - 81.163.193.218
2026-03-19 01:15:07.917102 | orchestrator | 2026-03-19 01:15:07 - clean up routers
2026-03-19 01:15:08.030321 | orchestrator | 2026-03-19 01:15:08 - testbed
2026-03-19 01:15:09.689636 | orchestrator | ok: Runtime: 0:00:22.571619
2026-03-19 01:15:09.693910 | 
2026-03-19 01:15:09.694102 | PLAY RECAP
2026-03-19 01:15:09.694236 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-19 01:15:09.694301 | 
2026-03-19 01:15:09.849998 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-19 01:15:09.851377 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-19 01:15:10.579569 | 
2026-03-19 01:15:10.579767 | PLAY [Cleanup play]
2026-03-19 01:15:10.600634 | 
2026-03-19 01:15:10.600778 | TASK [Set cloud fact (Zuul deployment)]
2026-03-19 01:15:10.656604 | orchestrator | ok
2026-03-19 01:15:10.665458 | 
2026-03-19 01:15:10.665604 | TASK [Set cloud fact (local deployment)]
2026-03-19 01:15:10.699749 | orchestrator | skipping: Conditional result was False
2026-03-19 01:15:10.708972 | 
2026-03-19 01:15:10.709106 | TASK [Clean the cloud environment]
2026-03-19 01:15:11.946841 | orchestrator | 2026-03-19 01:15:11 - clean up servers
2026-03-19 01:15:12.529512 | orchestrator | 2026-03-19 01:15:12 - clean up keypairs
2026-03-19 01:15:12.546586 | orchestrator | 2026-03-19 01:15:12 - wait for servers to be gone
2026-03-19 01:15:12.593524 | orchestrator | 2026-03-19 01:15:12 - clean up ports
2026-03-19 01:15:12.667887 | orchestrator | 2026-03-19 01:15:12 - clean up volumes
2026-03-19 01:15:12.743061 | orchestrator | 2026-03-19 01:15:12 - disconnect routers
2026-03-19 01:15:12.778808 | orchestrator | 2026-03-19 01:15:12 - clean up subnets
2026-03-19 01:15:12.799919 | orchestrator | 2026-03-19 01:15:12 - clean up networks
2026-03-19 01:15:13.004279 | orchestrator | 2026-03-19 01:15:13 - clean up security groups
2026-03-19 01:15:13.046987 | orchestrator | 2026-03-19 01:15:13 - clean up floating ips
2026-03-19 01:15:13.075568 | orchestrator | 2026-03-19 01:15:13 - clean up routers
2026-03-19 01:15:13.256978 | orchestrator | ok: Runtime: 0:00:01.653895
2026-03-19 01:15:13.259115 | 
2026-03-19 01:15:13.259213 | PLAY RECAP
2026-03-19 01:15:13.259279 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-19 01:15:13.259325 | 
2026-03-19 01:15:13.377520 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-19 01:15:13.380175 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-19 01:15:14.153043 | 
2026-03-19 01:15:14.153248 | PLAY [Base post-fetch]
2026-03-19 01:15:14.169998 | 
2026-03-19 01:15:14.170165 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-19 01:15:14.228190 | orchestrator | skipping: Conditional result was False
2026-03-19 01:15:14.239321 | 
2026-03-19 01:15:14.239522 | TASK [fetch-output : Set log path for single node]
2026-03-19 01:15:14.300562 | orchestrator | ok
2026-03-19 01:15:14.309171 | 
2026-03-19 01:15:14.309322 | LOOP [fetch-output : Ensure local output dirs]
2026-03-19 01:15:14.805708 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/work/logs"
2026-03-19 01:15:15.091894 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/work/artifacts"
2026-03-19 01:15:15.364352 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ed649bb755a64e3ab8e84305576b127a/work/docs"
2026-03-19 01:15:15.390684 | 
2026-03-19 01:15:15.390894 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-19 01:15:16.369628 | orchestrator | changed: .d..t...... ./
2026-03-19 01:15:16.369909 | orchestrator | changed: All items complete
2026-03-19 01:15:16.369949 | 
2026-03-19 01:15:17.059878 | orchestrator | changed: .d..t...... ./
2026-03-19 01:15:17.801267 | orchestrator | changed: .d..t...... ./
2026-03-19 01:15:17.835620 | 
2026-03-19 01:15:17.835775 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-19 01:15:17.869412 | orchestrator | skipping: Conditional result was False
2026-03-19 01:15:17.875366 | orchestrator | skipping: Conditional result was False
2026-03-19 01:15:17.890957 | 
2026-03-19 01:15:17.891088 | PLAY RECAP
2026-03-19 01:15:17.891163 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-19 01:15:17.891201 | 
2026-03-19 01:15:18.014249 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-19 01:15:18.015660 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-19 01:15:18.760342 | 
2026-03-19 01:15:18.760516 | PLAY [Base post]
2026-03-19 01:15:18.788923 | 
2026-03-19 01:15:18.789157 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-19 01:15:19.850934 | orchestrator | changed
2026-03-19 01:15:19.865669 | 
2026-03-19 01:15:19.865817 | PLAY RECAP
2026-03-19 01:15:19.865907 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-19 01:15:19.865998 | 
2026-03-19 01:15:20.004360 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-19 01:15:20.007109 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-19 01:15:20.812500 | 
2026-03-19 01:15:20.812670 | PLAY [Base post-logs]
2026-03-19 01:15:20.823957 | 
2026-03-19 01:15:20.824145 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-19 01:15:21.298089 | localhost | changed
2026-03-19 01:15:21.308126 | 
2026-03-19 01:15:21.308274 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-19 01:15:21.345400 | localhost | ok
2026-03-19 01:15:21.351356 | 
2026-03-19 01:15:21.351531 | TASK [Set zuul-log-path fact]
2026-03-19 01:15:21.368879 | localhost | ok
2026-03-19 01:15:21.379359 | 
2026-03-19 01:15:21.379474 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-19 01:15:21.405957 | localhost | ok
2026-03-19 01:15:21.410302 | 
2026-03-19 01:15:21.410440 | TASK [upload-logs : Create log directories]
2026-03-19 01:15:21.919635 | localhost | changed
2026-03-19 01:15:21.924572 | 
2026-03-19 01:15:21.924716 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-19 01:15:22.420033 | localhost -> localhost | ok: Runtime: 0:00:00.007219
2026-03-19 01:15:22.429815 | 
2026-03-19 01:15:22.430039 | TASK [upload-logs : Upload logs to log server]
2026-03-19 01:15:23.031486 | localhost | Output suppressed because no_log was given
2026-03-19 01:15:23.034792 | 
2026-03-19 01:15:23.035047 | LOOP [upload-logs : Compress console log and json output]
2026-03-19 01:15:23.095564 | localhost | skipping: Conditional result was False
2026-03-19 01:15:23.100642 | localhost | skipping: Conditional result was False
2026-03-19 01:15:23.109508 | 
2026-03-19 01:15:23.109732 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-19 01:15:23.155406 | localhost | skipping: Conditional result was False
2026-03-19 01:15:23.155980 | 
2026-03-19 01:15:23.159436 | localhost | skipping: Conditional result was False
2026-03-19 01:15:23.172709 | 
2026-03-19 01:15:23.172931 | LOOP [upload-logs : Upload console log and json output]